Date: Wed, 24 May 2023 17:25:46 +0800
From: Baoquan He <bhe@redhat.com>
To: Thomas Gleixner
Cc: linux-mm@kvack.org, Andrew Morton, Christoph Hellwig, Uladzislau Rezki, Lorenzo Stoakes, Peter Zijlstra
Subject: Re: [patch 1/6] mm/vmalloc: Prevent stale TLBs in fully utilized blocks
References: <20230523135902.517032811@linutronix.de> <20230523140002.575854344@linutronix.de>
In-Reply-To: <20230523140002.575854344@linutronix.de>
On 05/23/23 at 04:02pm, Thomas Gleixner wrote:
> _vm_unmap_aliases() is used to ensure that no unflushed TLB entries for a
> page are left in the system. This is required due to the lazy TLB flush
> mechanism in vmalloc.
>
> This is tried to achieve by walking the per CPU free lists, but those do
> not contain fully utilized vmap blocks because they are removed from the
> free list once the blocks free space became zero.

The problem description is not accurate. The flushing of the va associated
with a vmap_block is attempted by walking the per-CPU free lists, but those
fully utilized vmap blocks can still be flushed in __purge_vmap_area_lazy()
through the [min:max] calculation over purge_vmap_area_list, because the va
of vmap_blocks will be added to purge_vmap_area_list too via vb_free(). Only
if we finally exclude the vmap_block va from the purge list in
__purge_vmap_area_lazy() can we say that stale TLBs are missed. No?

IMHO, this is preparation work for excluding the vmap_block va from
purge-list flushing. Please correct me if I am wrong.
>
> So the per CPU list iteration does not find the block and if the page was
> mapped via such a block and the TLB has not yet been flushed, the guarantee
> of _vm_unmap_aliases() that there are no stale TLBs after returning is
> broken:
>
>   x = vb_alloc()       // Removes vmap_block from free list because vb->free became 0
>   vb_free(x)           // Unmaps page and marks in dirty_min/max range
>
>   // Page is reused
>   vm_unmap_aliases()   // Can't find vmap block with the dirty space -> FAIL
>
> So instead of walking the per CPU free lists, walk the per CPU xarrays
> which hold pointers to _all_ active blocks in the system including those
> removed from the free lists.
>
> Signed-off-by: Thomas Gleixner
> ---
>  mm/vmalloc.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2236,9 +2236,10 @@ static void _vm_unmap_aliases(unsigned l
>  	for_each_possible_cpu(cpu) {
>  		struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
>  		struct vmap_block *vb;
> +		unsigned long idx;
>
>  		rcu_read_lock();
> -		list_for_each_entry_rcu(vb, &vbq->free, free_list) {
> +		xa_for_each(&vbq->vmap_blocks, idx, vb) {
>  			spin_lock(&vb->lock);
>  			if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
>  				unsigned long va_start = vb->va->va_start;
>