From: Thomas Gleixner <tglx@linutronix.de>
To: Baoquan He
Cc: "Russell King (Oracle)", Andrew Morton, linux-mm@kvack.org, Christoph Hellwig, Uladzislau Rezki, Lorenzo Stoakes, Peter Zijlstra, John Ogness, linux-arm-kernel@lists.infradead.org, Mark Rutland, Marc Zyngier, x86@kernel.org
Subject: Re: Excessive TLB flush ranges
References: <87r0rg93z5.ffs@tglx> <87ilcs8zab.ffs@tglx> <87fs7w8z6y.ffs@tglx> <874joc8x7d.ffs@tglx> <87r0rg73wp.ffs@tglx> <87edng6qu8.ffs@tglx> <87y1ln5md2.ffs@tglx>
Date: Fri, 19 May 2023 13:22:33 +0200
Message-ID: <875y8o5zwm.ffs@tglx>
On Wed, May 17 2023 at 18:52, Baoquan He wrote:
> On 05/17/23 at 11:38am, Thomas Gleixner wrote:
>> On Tue, May 16 2023 at 21:03, Thomas Gleixner wrote:
>> >
>> > Aside of that, if I read the code correctly then if there is an unmap
>> > via vb_free() which does not cover the whole vmap block then vb->dirty
>> > is set and every _vm_unmap_aliases() invocation flushes that dirty range
>> > over and over until that vmap block is completely freed, no?
>>
>> Something like the below would cure that.
>>
>> While it prevents that this is flushed forever it does not cure the
>> eventually overly broad flush when the block is completely dirty and
>> purged:
>>
>> Assume a block with 1024 pages, where 1022 pages are already freed and
>> TLB flushed. Now the last 2 pages are freed and the block is purged,
>> which results in a flush of 1024 pages where 1022 are already done,
>> right?
>
> This is good idea, I am thinking how to reply to your last mail and how
> to fix this. While your cure code may not work well. Please see below
> inline comment.

See below.

> One vmap block has 64 pages.
> #define VMAP_MAX_ALLOC		BITS_PER_LONG	/* 256K with 4K pages */

No. VMAP_MAX_ALLOC is the allocation limit for a single vb_alloc(), not
the vmap block size. A vmap block has at least 128 pages on 64bit, but
can have up to 1024:

  #define VMAP_BBMAP_BITS_MAX	1024	/* 4MB with 4K pages */
  #define VMAP_BBMAP_BITS_MIN	(VMAP_MAX_ALLOC*2)

and then some magic happens to calculate the actual size:

  #define VMAP_BBMAP_BITS \
	VMAP_MIN(VMAP_BBMAP_BITS_MAX, \
		VMAP_MAX(VMAP_BBMAP_BITS_MIN, \
			VMALLOC_PAGES / roundup_pow_of_two(NR_CPUS) / 16))

which is in the range of (2*BITS_PER_LONG) ... 1024. The actual vmap
block size is:

  #define VMAP_BLOCK_SIZE	(VMAP_BBMAP_BITS * PAGE_SIZE)

which is then obviously something between 512k and 4MB on 64bit and
between 256k and 4MB on 32bit.
>> @@ -2240,13 +2240,17 @@ static void _vm_unmap_aliases(unsigned l
>>  		rcu_read_lock();
>>  		list_for_each_entry_rcu(vb, &vbq->free, free_list) {
>>  			spin_lock(&vb->lock);
>> -			if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
>> +			if (vb->dirty_max && vb->dirty != VMAP_BBMAP_BITS) {
>>  				unsigned long va_start = vb->va->va_start;
>>  				unsigned long s, e;
>
> When vb_free() is invoked, it could cause three kinds of vmap_block as
> below. Your code works well for the 2nd case, for the 1st one, it may be
> not. And the 2nd one is the stuff that we reclaim and put into purge
> list in purge_fragmented_blocks_allcpus().
>
> 1)
>    |-----|------------|-----------|-------|
>    |dirty|still mapped|   dirty   | free  |
>
> 2)
>    |------------------------------|-------|
>    |            dirty             | free  |

You sure? The first one is put into the purge list too:

	/* Expand dirty range */
	vb->dirty_min = min(vb->dirty_min, offset);
	vb->dirty_max = max(vb->dirty_max, offset + (1UL << order));

Assume:

              pages  bits   dirtymin         dirtymax
 vb_alloc(A)    2    0 - 1  VMAP_BBMAP_BITS  0
 vb_alloc(B)    4    2 - 5
 vb_alloc(C)    2    6 - 7

So you get three variants:

1) Flush after each free

 vb_free(A)     2    0 - 1  0                1
 Flush                      VMAP_BBMAP_BITS  0   <- correct
 vb_free(C)     2    6 - 7  6                7
 Flush                      VMAP_BBMAP_BITS  0   <- correct

2) No flush between freeing A and C

 vb_free(A)     2    0 - 1  0                1
 vb_free(C)     2    6 - 7  0                7
 Flush                      VMAP_BBMAP_BITS  0   <- overbroad flush

3) No flush between freeing A, C, B

 vb_free(A)     2    0 - 1  0                1
 vb_free(C)     2    6 - 7  0                7
 vb_free(B)     4    2 - 5  0                7
 Flush                      VMAP_BBMAP_BITS  0   <- correct

So my quick hack makes it correct for #1 and #3 and prevents repeated
flushes of already flushed areas. To prevent #2 you need a bitmap which
keeps track of the flushed areas.

Thanks,

	tglx