From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 8 Sep 2023 19:38:42 +0800
From: Baoquan He <bhe@redhat.com>
To: Uladzislau Rezki
Cc: k-hagio-ab@nec.com, lijiang@redhat.com, linux-mm@kvack.org, Andrew Morton, LKML, Lorenzo Stoakes, Christoph Hellwig, Matthew Wilcox, "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes, Oleksiy Avramchenko, kexec@lists.infradead.org
Subject: Re: [PATCH v2 4/9] mm: vmalloc: Remove global vmap_area_root rb-tree
References: <20230829081142.3619-1-urezki@gmail.com> <20230829081142.3619-5-urezki@gmail.com> <8939ea67-ca27-1aa5-dfff-37d78ad59bb8@nec.com> <1d613b25-58d8-375b-6ef4-b27bc9b735e3@nec.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
On 09/08/23 at 01:25pm, Uladzislau Rezki wrote:
> On Fri, Sep 08, 2023 at 02:44:56PM +0800, Baoquan He wrote:
> > On 09/08/23 at 05:01am, HAGIO KAZUHITO(萩尾 一仁) wrote:
> > > On 2023/09/08 13:43, Baoquan He wrote:
> > > > On 09/08/23 at 01:51am, HAGIO KAZUHITO(萩尾 一仁) wrote:
> > > >> On 2023/09/07 18:58, Baoquan He wrote:
> > > >>> On 09/07/23 at 11:39am, Uladzislau Rezki wrote:
> > > >>>> On Thu, Sep 07, 2023 at 10:17:39AM +0800, Baoquan He wrote:
> > > >>>>> Add Kazu and Lianbo to CC, and the kexec mailing list.
> > > >>>>>
> > > >>>>> On 08/29/23 at 10:11am, Uladzislau Rezki (Sony) wrote:
> > > >>>>>> Store allocated objects in separate nodes. A va->va_start
> > > >>>>>> address is converted into the correct node where it should
> > > >>>>>> be placed and reside. An addr_to_node() function is used
> > > >>>>>> to do the address conversion and determine the node that
> > > >>>>>> contains a VA.
> > > >>>>>>
> > > >>>>>> Such an approach balances VAs across nodes, so access
> > > >>>>>> becomes scalable. The number of nodes in a system is the
> > > >>>>>> number of CPUs divided by two, i.e. a density factor of 1/2.
> > > >>>>>>
> > > >>>>>> Please note:
> > > >>>>>>
> > > >>>>>> 1. As of now, allocated VAs are bound to node 0, so the
> > > >>>>>> patch does not make any difference compared with the current
> > > >>>>>> behavior;
> > > >>>>>>
> > > >>>>>> 2. The global vmap_area_lock and vmap_area_root are removed as there
> > > >>>>>> is no need for them anymore. The vmap_area_list is still kept and
> > > >>>>>> is _empty_. It is exported for kexec only;
> > > >>>>>
> > > >>>>> I haven't run a test, but accessing all nodes' busy trees to get the
> > > >>>>> va with the lowest address could severely impact kcore reading
> > > >>>>> efficiency on systems with many vmap nodes. People doing live
> > > >>>>> debugging via /proc/kcore will get a little surprise.
> > > >>>>>
> > > >>>>> An empty vmap_area_list will break the makedumpfile utility, and the
> > > >>>>> Crash utility could be impacted too. I checked the makedumpfile code;
> > > >>>>> it relies on vmap_area_list to deduce the vmalloc_start value.
> > > >>>>>
> > > >>>> That part is left over and I hope to fix it in v3. The problem here
> > > >>>> is that we cannot give an opportunity to access vmap internals from
> > > >>>> outside. That is just not correct, i.e. you are not allowed to
> > > >>>> access the list directly.
> > > >>>
> > > >>> Right. Thanks for the fix in v3, that is a relief for makedumpfile
> > > >>> and crash.
> > > >>>
> > > >>> Hi Kazu,
> > > >>>
> > > >>> Meanwhile, I am thinking about whether we should evaluate the
> > > >>> necessity of vmap_area_list in makedumpfile and Crash. In
> > > >>> makedumpfile, we just use vmap_area_list to deduce VMALLOC_START.
> > > >>> Wondering if we can export VMALLOC_START directly. Surely, the lowest
> > > >>> va->va_start in vmap_area_list is a tighter low boundary of the
> > > >>> vmalloc area and can reduce unnecessary scanning below the lowest va.
> > > >>> Not sure if this is the reason people decided to export
> > > >>> vmap_area_list.
> > > >>
> > > >> The kernel commit acd99dbf5402 introduced the original vmlist entry to
> > > >> vmcoreinfo, but there is no information about why it did not export
> > > >> VMALLOC_START directly.
> > > >>
> > > >> If VMALLOC_START is exported directly to vmcoreinfo, I think it would
> > > >> be enough for makedumpfile.
> > > >
> > > > Thanks for the confirmation, Kazu.
> > > >
> > > > Then, the draft patch below should be enough to export VMALLOC_START
> > > > instead, and remove vmap_area_list.
> > >
> > > also the following entries can be removed.
> > >
> > > VMCOREINFO_OFFSET(vmap_area, va_start);
> > > VMCOREINFO_OFFSET(vmap_area, list);
> >
> > Right, they are useless now. I updated the patch below to remove them.
> >
> > From a867fada34fd9e96528fcc5e72ae50b3b5685015 Mon Sep 17 00:00:00 2001
> > From: Baoquan He <bhe@redhat.com>
> > Date: Fri, 8 Sep 2023 11:53:22 +0800
> > Subject: [PATCH] mm/vmalloc: remove vmap_area_list
> > Content-type: text/plain
> >
> > Earlier, vmap_area_list was exported to vmcoreinfo so that makedumpfile
> > could get the base address of the vmalloc area. Now vmap_area_list is
> > empty, so export VMALLOC_START to vmcoreinfo instead, and remove
> > vmap_area_list.
> >
> > Signed-off-by: Baoquan He <bhe@redhat.com>
> > ---
> >  Documentation/admin-guide/kdump/vmcoreinfo.rst | 8 ++++----
> >  arch/arm64/kernel/crash_core.c                 | 1 -
> >  arch/riscv/kernel/crash_core.c                 | 1 -
> >  include/linux/vmalloc.h                        | 1 -
> >  kernel/crash_core.c                            | 4 +---
> >  kernel/kallsyms_selftest.c                     | 1 -
> >  mm/nommu.c                                     | 2 --
> >  mm/vmalloc.c                                   | 3 +--
> >  8 files changed, 6 insertions(+), 15 deletions(-)
> >
> > diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst
> > index 599e8d3bcbc3..c11bd4b1ceb1 100644
> > --- a/Documentation/admin-guide/kdump/vmcoreinfo.rst
> > +++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst
> > @@ -65,11 +65,11 @@ Defines the beginning of the text section. In general, _stext indicates
> >  the kernel start address. Used to convert a virtual address from the
> >  direct kernel map to a physical address.
> >
> > -vmap_area_list
> > ---------------
> > +VMALLOC_START
> > +-------------
> >
> > -Stores the virtual area list. makedumpfile gets the vmalloc start value
> > -from this variable and its value is necessary for vmalloc translation.
> > +Stores the base address of the vmalloc area. makedumpfile gets this
> > +value since it is necessary for vmalloc translation.
> >
> >  mem_map
> >  -------
> >
> > diff --git a/arch/arm64/kernel/crash_core.c b/arch/arm64/kernel/crash_core.c
> > index 66cde752cd74..2a24199a9b81 100644
> > --- a/arch/arm64/kernel/crash_core.c
> > +++ b/arch/arm64/kernel/crash_core.c
> > @@ -23,7 +23,6 @@ void arch_crash_save_vmcoreinfo(void)
> >  	/* Please note VMCOREINFO_NUMBER() uses "%d", not "%x" */
> >  	vmcoreinfo_append_str("NUMBER(MODULES_VADDR)=0x%lx\n", MODULES_VADDR);
> >  	vmcoreinfo_append_str("NUMBER(MODULES_END)=0x%lx\n", MODULES_END);
> > -	vmcoreinfo_append_str("NUMBER(VMALLOC_START)=0x%lx\n", VMALLOC_START);
> >  	vmcoreinfo_append_str("NUMBER(VMALLOC_END)=0x%lx\n", VMALLOC_END);
> >  	vmcoreinfo_append_str("NUMBER(VMEMMAP_START)=0x%lx\n", VMEMMAP_START);
> >  	vmcoreinfo_append_str("NUMBER(VMEMMAP_END)=0x%lx\n", VMEMMAP_END);
> > diff --git a/arch/riscv/kernel/crash_core.c b/arch/riscv/kernel/crash_core.c
> > index 55f1d7856b54..5c39cedd2c5c 100644
> > --- a/arch/riscv/kernel/crash_core.c
> > +++ b/arch/riscv/kernel/crash_core.c
> > @@ -9,7 +9,6 @@ void arch_crash_save_vmcoreinfo(void)
> >  	VMCOREINFO_NUMBER(phys_ram_base);
> >
> >  	vmcoreinfo_append_str("NUMBER(PAGE_OFFSET)=0x%lx\n", PAGE_OFFSET);
> > -	vmcoreinfo_append_str("NUMBER(VMALLOC_START)=0x%lx\n", VMALLOC_START);
> >  	vmcoreinfo_append_str("NUMBER(VMALLOC_END)=0x%lx\n", VMALLOC_END);
> >  	vmcoreinfo_append_str("NUMBER(VMEMMAP_START)=0x%lx\n", VMEMMAP_START);
> >  	vmcoreinfo_append_str("NUMBER(VMEMMAP_END)=0x%lx\n", VMEMMAP_END);
> > diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> > index c720be70c8dd..91810b4e9510 100644
> > --- a/include/linux/vmalloc.h
> > +++ b/include/linux/vmalloc.h
> > @@ -253,7 +253,6 @@ extern long vread_iter(struct iov_iter *iter, const char *addr, size_t count);
> >  /*
> >   *	Internals.  Don't use..
> >   */
> > -extern struct list_head vmap_area_list;
> >  extern __init void vm_area_add_early(struct vm_struct *vm);
> >  extern __init void vm_area_register_early(struct vm_struct *vm, size_t align);
> >
> > diff --git a/kernel/crash_core.c b/kernel/crash_core.c
> > index 03a7932cde0a..a9faaf7e5f7d 100644
> > --- a/kernel/crash_core.c
> > +++ b/kernel/crash_core.c
> > @@ -617,7 +617,7 @@ static int __init crash_save_vmcoreinfo_init(void)
> >  	VMCOREINFO_SYMBOL_ARRAY(swapper_pg_dir);
> >  #endif
> >  	VMCOREINFO_SYMBOL(_stext);
> > -	VMCOREINFO_SYMBOL(vmap_area_list);
> > +	vmcoreinfo_append_str("NUMBER(VMALLOC_START)=0x%lx\n", VMALLOC_START);
> >
> >  #ifndef CONFIG_NUMA
> >  	VMCOREINFO_SYMBOL(mem_map);
> > @@ -658,8 +658,6 @@ static int __init crash_save_vmcoreinfo_init(void)
> >  	VMCOREINFO_OFFSET(free_area, free_list);
> >  	VMCOREINFO_OFFSET(list_head, next);
> >  	VMCOREINFO_OFFSET(list_head, prev);
> > -	VMCOREINFO_OFFSET(vmap_area, va_start);
> > -	VMCOREINFO_OFFSET(vmap_area, list);
> >  	VMCOREINFO_LENGTH(zone.free_area, MAX_ORDER + 1);
> >  	log_buf_vmcoreinfo_setup();
> >  	VMCOREINFO_LENGTH(free_area.free_list, MIGRATE_TYPES);
> > diff --git a/kernel/kallsyms_selftest.c b/kernel/kallsyms_selftest.c
> > index b4cac76ea5e9..8a689b4ff4f9 100644
> > --- a/kernel/kallsyms_selftest.c
> > +++ b/kernel/kallsyms_selftest.c
> > @@ -89,7 +89,6 @@ static struct test_item test_items[] = {
> >  	ITEM_DATA(kallsyms_test_var_data_static),
> >  	ITEM_DATA(kallsyms_test_var_bss),
> >  	ITEM_DATA(kallsyms_test_var_data),
> > -	ITEM_DATA(vmap_area_list),
> >  #endif
> >  };
> >
> > diff --git a/mm/nommu.c b/mm/nommu.c
> > index 7f9e9e5a0e12..8c6686176ebd 100644
> > --- a/mm/nommu.c
> > +++ b/mm/nommu.c
> > @@ -131,8 +131,6 @@ int follow_pfn(struct vm_area_struct *vma, unsigned long address,
> >  }
> >  EXPORT_SYMBOL(follow_pfn);
> >
> > -LIST_HEAD(vmap_area_list);
> > -
> >  void vfree(const void *addr)
> >  {
> >  	kfree(addr);
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 50d8239b82df..0a02633a9566 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -729,8 +729,7 @@ EXPORT_SYMBOL(vmalloc_to_pfn);
> >
> >
> >  static DEFINE_SPINLOCK(free_vmap_area_lock);
> > -/* Export for kexec only */
> > -LIST_HEAD(vmap_area_list);
> > +
> >  static bool vmap_initialized __read_mostly;
> >
> >  /*
> > --
> > 2.41.0
> >
> Appreciate your great input. This patch can go standalone with a slight
> commit message update, or I can take it and send it out as part of v3.
>
> Either way I am totally fine. What do you prefer?

Maybe take it together with this patchset in v3. Thanks.