From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 14 Jan 2021 12:52:48 +0100
From: Oscar Salvador
To: Muchun Song
Cc: Mike Kravetz, Jonathan Corbet, Thomas Gleixner, mingo@redhat.com,
	bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	dave.hansen@linux.intel.com, luto@kernel.org, Peter Zijlstra,
	viro@zeniv.linux.org.uk, Andrew Morton, paulmck@kernel.org,
	mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
	Randy Dunlap, oneukum@suse.com, anshuman.khandual@arm.com,
	jroedel@suse.de, Mina Almasry, David Rientjes, Matthew Wilcox,
	Michal Hocko, "Song Bao Hua (Barry Song)", David Hildenbrand,
	HORIGUCHI NAOYA (堀口 直也), Xiongchun duan,
	linux-doc@vger.kernel.org, LKML, Linux Memory Management List,
	linux-fsdevel
Subject: Re: [External] Re: [PATCH v12 04/13] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
Message-ID: <20210114115248.GA24592@localhost.localdomain>
References: <20210106141931.73931-1-songmuchun@bytedance.com>
 <20210106141931.73931-5-songmuchun@bytedance.com>
 <20210112080453.GA10895@linux>
 <20210113092028.GB24816@linux>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Thu, Jan 14, 2021 at 06:54:30PM +0800, Muchun Song wrote:
> I think this approach may be only suitable for generic huge page only.
> So we can implement it only for huge page.
>
> Hi Oscar,
>
> What's your opinion about this?
I tried something like:

static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
			      unsigned long end,
			      struct vmemmap_remap_walk *walk)
{
	pte_t *pte;

	pte = pte_offset_kernel(pmd, addr);

	if (!walk->reuse_page) {
		BUG_ON(pte_none(*pte));
		walk->reuse_page = pte_page(*pte++);
		addr = walk->remap_start;
	}

	for (; addr != end; addr += PAGE_SIZE, pte++) {
		BUG_ON(pte_none(*pte));
		walk->remap_pte(pte, addr, walk);
	}
}

void vmemmap_remap_free(unsigned long start, unsigned long end,
			unsigned long reuse)
{
	LIST_HEAD(vmemmap_pages);
	struct vmemmap_remap_walk walk = {
		.remap_pte	= vmemmap_remap_pte,
		.reuse_addr	= reuse,
		.remap_start	= start,
		.vmemmap_pages	= &vmemmap_pages,
	};

	BUG_ON(start != reuse + PAGE_SIZE);

	vmemmap_remap_range(reuse, end, &walk);
	free_vmemmap_page_list(&vmemmap_pages);
}

but it might overcomplicate things and I am not sure it is any better,
so I am fine with keeping it as is. Should another user come along in
the future, we can always revisit.

Maybe just add a little comment in vmemmap_pte_range() explaining why
we advance the address by PAGE_SIZE, and I would like to see a comment
in vmemmap_remap_free() on why the BUG_ON is there and, more
importantly, what it is checking.

-- 
Oscar Salvador
SUSE L3