From: Uladzislau Rezki <urezki@gmail.com>
Date: Fri, 23 Feb 2024 19:55:09 +0100
To: Baoquan He
Cc: Uladzislau Rezki, Pedro Falcato, Matthew Wilcox, Mel Gorman,
	kirill.shutemov@linux.intel.com, Vishal Moola, Andrew Morton, LKML,
	Lorenzo Stoakes, Christoph Hellwig, "Liam R. Howlett", Dave Chinner,
	"Paul E. McKenney", Joel Fernandes, Oleksiy Avramchenko, linux-mm@kvack.org
Subject: Re: [PATCH v3 00/11] Mitigate a vmap lock contention v3
References: <20240102184633.748113-1-urezki@gmail.com>
Content-Type: text/plain; charset=utf-8

On Fri, Feb 23, 2024 at 11:57:25PM +0800, Baoquan He wrote:
> On 02/23/24 at 12:06pm, Uladzislau Rezki wrote:
> > > On 02/23/24 at 10:34am, Uladzislau Rezki wrote:
> > > > On Thu, Feb 22, 2024 at 11:15:59PM +0000, Pedro Falcato wrote:
> > > > > Hi,
> > > > >
> > > > > On Thu, Feb 22, 2024 at 8:35 AM Uladzislau Rezki wrote:
> > > > > >
> > > > > > Hello, Folk!
> > > > > >
> > > > > > [...]
> > > > > > pagetable_alloc - gets increased as soon as a higher pressure is applied by
> > > > > > increasing the number of workers. Running the same number of jobs on a next run
> > > > > > does not increase it; it stays at the same level as the previous run.
> > > > > >
> > > > > > /**
> > > > > >  * pagetable_alloc - Allocate pagetables
> > > > > >  * @gfp: GFP flags
> > > > > >  * @order: desired pagetable order
> > > > > >  *
> > > > > >  * pagetable_alloc allocates memory for page tables as well as a page table
> > > > > >  * descriptor to describe that memory.
> > > > > >  *
> > > > > >  * Return: The ptdesc describing the allocated page tables.
> > > > > >  */
> > > > > > static inline struct ptdesc *pagetable_alloc(gfp_t gfp, unsigned int order)
> > > > > > {
> > > > > > 	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
> > > > > >
> > > > > > 	return page_ptdesc(page);
> > > > > > }
> > > > > >
> > > > > > Could you please comment on it?
> > > > > > Or do you have any thought? Is it expected?
> > > > > > Are page-tables ever shrunk?
> > > > >
> > > > > It's my understanding that the vunmap_range helpers don't actively
> > > > > free page tables, they just clear PTEs. munmap does free them in
> > > > > mmap.c:free_pgtables, maybe something could be worked up for vmalloc
> > > > > too.
> > > > >
> > > > Right. I see that for user space, pgtables are removed. There was
> > > > work on it.
> > > >
> > > > > I would not be surprised if the memory increase you're seeing is more
> > > > > or less correlated to the maximum vmalloc footprint throughout the
> > > > > whole test.
> > > > >
> > > > Yes, the vmalloc footprint follows the memory usage. Some use cases
> > > > map a lot of memory.
> > >
> > > The 'nr_threads=256' testing may be too radical. I took the test on
> > > a bare metal machine as below; it is still running and hangs there after
> > > 30 minutes. I did this after system boot. I am looking for other
> > > machines with more processors.
> > >
> > > [root@dell-r640-068 ~]# nproc
> > > 64
> > > [root@dell-r640-068 ~]# free -h
> > >               total        used        free      shared  buff/cache   available
> > > Mem:          187Gi        18Gi       169Gi        12Mi       262Mi       168Gi
> > > Swap:         4.0Gi          0B       4.0Gi
> > > [root@dell-r640-068 ~]#
> > >
> > > [root@dell-r640-068 linux]# tools/testing/selftests/mm/test_vmalloc.sh run_test_mask=127 nr_threads=256
> > > Run the test with following parameters: run_test_mask=127 nr_threads=256
> >
> > Agree, nr_threads=256 is way too radical :) Mine took 50 minutes to
> > complete. So wait more :)
>
> Right, mine could take a similar time to finish. I got a machine
> with 288 cpus; let me see if I can get some clues. When I went through the
> code flow, I suddenly realized it could be drain_vmap_area_work which is the
> bottleneck and causes the tremendous page-table page consumption.
>
> On your system there are 64 cpus, so
>
>   nr_lazy_max = lazy_max_pages() = 7 * 32M = 224M;
>
> So with nr_threads=128 or 256 it is very easy to reach nr_lazy_max and
> trigger drain_vmap_work(). When cpu resource is very limited, the lazy
> vmap purging will be very slow, while the alloc/free in lib/test_vmalloc.c
> go far faster and more easily than the vmap reclaiming. If an old va is
> not reused, a new va is allocated and keeps extending, so new page tables
> surely need to be created to cover them.
>
> I will take the testing on the system with 288 cpus, and will update when
> the testing is done.
>

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 12caa794abd4..a90c5393d85f 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1754,6 +1754,8 @@ size_to_va_pool(struct vmap_node *vn, unsigned long size)
 	return NULL;
 }
 
+static unsigned long lazy_max_pages(void);
+
 static bool
 node_pool_add_va(struct vmap_node *n, struct vmap_area *va)
 {
@@ -1763,6 +1765,9 @@ node_pool_add_va(struct vmap_node *n, struct vmap_area *va)
 	if (!vp)
 		return false;
 
+	if (READ_ONCE(vp->len) > lazy_max_pages())
+		return false;
+
 	spin_lock(&n->pool_lock);
 	list_add(&va->list, &vp->head);
 	WRITE_ONCE(vp->len, vp->len + 1);
@@ -2170,9 +2175,9 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end,
 			INIT_WORK(&vn->purge_work, purge_vmap_node);
 
 			if (cpumask_test_cpu(i, cpu_online_mask))
-				schedule_work_on(i, &vn->purge_work);
+				queue_work_on(i, system_highpri_wq, &vn->purge_work);
 			else
-				schedule_work(&vn->purge_work);
+				queue_work(system_highpri_wq, &vn->purge_work);
 
 			nr_purge_helpers--;
 		} else {

We need this. It settles page-table usage back to a normal level.
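For reference, the lazy_max_pages() threshold that the new node_pool_add_va()
check relies on is the same one used in the calculation above. A minimal
sketch of that helper, as it looks in recent mainline (details may differ
between trees):

/*
 * Sketch of mm/vmalloc.c:lazy_max_pages(), the per-system cap on
 * lazily-freed vmap pages. The return value is in pages: with 64
 * online CPUs, fls(64) == 7, so the cap is 7 * 32MB worth of pages,
 * i.e. the "7 * 32M = 224M" figure mentioned above.
 */
static unsigned long lazy_max_pages(void)
{
	unsigned int log;

	log = fls(num_online_cpus());

	return log * (32UL * 1024 * 1024 / PAGE_SIZE);
}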
Tomorrow I will check whether the cache-len should be limited.

I tested on my 64-CPU system with the radical 256 kworkers. It looks good.

--
Uladzislau Rezki