Date: Fri, 23 Feb 2024 23:57:25 +0800
From: Baoquan He <bhe@redhat.com>
To: Uladzislau Rezki
Cc: Pedro Falcato, Matthew Wilcox, Mel Gorman, kirill.shutemov@linux.intel.com, Vishal Moola, Andrew Morton, LKML, Lorenzo Stoakes, Christoph Hellwig, "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes, Oleksiy Avramchenko, linux-mm@kvack.org
Subject: Re: [PATCH v3 00/11] Mitigate a vmap lock contention v3
References: <20240102184633.748113-1-urezki@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
On 02/23/24 at 12:06pm, Uladzislau Rezki wrote:
> > On 02/23/24 at 10:34am, Uladzislau Rezki wrote:
> > > On Thu, Feb 22, 2024 at 11:15:59PM +0000, Pedro Falcato wrote:
> > > > Hi,
> > > >
> > > > On Thu, Feb 22, 2024 at 8:35 AM Uladzislau Rezki wrote:
> > > > >
> > > > > Hello, Folk!
> > > > >
> > > > > [...]
> > > > > pagetable_alloc - gets increased as soon as higher pressure is applied
> > > > > by increasing the number of workers. Running the same number of jobs
> > > > > on a subsequent run does not increase it further; it stays at the same
> > > > > level as the previous run.
> > > > >
> > > > > /**
> > > > >  * pagetable_alloc - Allocate pagetables
> > > > >  * @gfp: GFP flags
> > > > >  * @order: desired pagetable order
> > > > >  *
> > > > >  * pagetable_alloc allocates memory for page tables as well as a page table
> > > > >  * descriptor to describe that memory.
> > > > >  *
> > > > >  * Return: The ptdesc describing the allocated page tables.
> > > > >  */
> > > > > static inline struct ptdesc *pagetable_alloc(gfp_t gfp, unsigned int order)
> > > > > {
> > > > >         struct page *page = alloc_pages(gfp | __GFP_COMP, order);
> > > > >
> > > > >         return page_ptdesc(page);
> > > > > }
> > > > >
> > > > > Could you please comment on it? Or do you have any thoughts? Is it
> > > > > expected? Is a page table ever shrunk?
> > > >
> > > > It's my understanding that the vunmap_range helpers don't actively
> > > > free page tables, they just clear PTEs. munmap does free them in
> > > > mmap.c:free_pgtables; maybe something similar could be worked up for
> > > > vmalloc too.
> > > >
> > > Right. I see that for user space, pgtables are removed. There was
> > > work on that.
> > > >
> > > > I would not be surprised if the memory increase you're seeing is more
> > > > or less correlated to the maximum vmalloc footprint throughout the
> > > > whole test.
> > > >
> > > Yes, the vmalloc footprint follows the memory usage. Some use cases
> > > map a lot of memory.
> >
> > The 'nr_threads=256' testing may be too radical. I ran the test on a
> > bare-metal machine as below; it is still running and hanging there after
> > 30 minutes. I did this right after system boot. I am looking for other
> > machines with more processors.
> >
> > [root@dell-r640-068 ~]# nproc
> > 64
> > [root@dell-r640-068 ~]# free -h
> >               total        used        free      shared  buff/cache   available
> > Mem:          187Gi        18Gi       169Gi        12Mi       262Mi       168Gi
> > Swap:         4.0Gi          0B       4.0Gi
> > [root@dell-r640-068 ~]#
> >
> > [root@dell-r640-068 linux]# tools/testing/selftests/mm/test_vmalloc.sh run_test_mask=127 nr_threads=256
> > Run the test with following parameters: run_test_mask=127 nr_threads=256
> >
> Agree, nr_threads=256 is way too radical :) Mine took 50 minutes to
> complete. So wait some more :)

Right, mine could take a similar time to finish. I have got hold of a machine
with 288 CPUs; let me see if I can get some clues there.

When I went through the code flow, I suddenly realized that it could be
drain_vmap_area_work that is the bottleneck and causes the tremendous
page-table page cost. On your system there are 64 CPUs, so:

  nr_lazy_max = lazy_max_pages() = 7 * 32M = 224M

With nr_threads=128 or 256, it is very easy to reach nr_lazy_max and trigger
drain_vmap_area_work(). When CPU resources are very limited, the lazy vmap
purging will be very slow, while the alloc/free operations in
lib/test_vmalloc.c proceed far faster and more easily than the vmap
reclaiming. If an old va is not reused, a new va is allocated and the vmap
space keeps extending, so new page tables surely need to be created to
cover it.

I will run the test on the system with 288 CPUs and will update once the
testing is done.