Date: Sat, 20 May 2023 07:46:12 +0800
From: Baoquan He <bhe@redhat.com>
To: Thomas Gleixner
Cc: "Russell King (Oracle)", Andrew Morton, linux-mm@kvack.org,
 Christoph Hellwig, Uladzislau Rezki, Lorenzo Stoakes, Peter Zijlstra,
 John Ogness, linux-arm-kernel@lists.infradead.org, Mark Rutland,
 Marc Zyngier, x86@kernel.org
Subject: Re: [RFC PATCH 3/3] mm/vmalloc.c: change _vm_unmap_aliases() to do purge firstly
Message-ID:
In-Reply-To: <87cz2w415t.ffs@tglx>
References: <87r0rg93z5.ffs@tglx> <87ilcs8zab.ffs@tglx> <87fs7w8z6y.ffs@tglx>
 <874joc8x7d.ffs@tglx> <87r0rg73wp.ffs@tglx> <87edng6qu8.ffs@tglx>
 <87cz2w415t.ffs@tglx>

On 05/19/23 at 08:38pm, Thomas Gleixner wrote:
> On Fri, May 19 2023 at 20:03, Baoquan He wrote:
> > After vb_free() invocation, the va will be purged and put into the
> > purge tree/list if the entire vmap_block is dirty.
> > If not entirely dirty, the vmap_block is still in the percpu
> > vmap_block_queue list, just like the below two graphs:
> >
> > (1)
> > |-----|------------|-----------|-------|
> > |dirty|still mapped|   dirty   | free  |
> >
> > (2)
> > |------------------------------|-------|
> > |            dirty             | free  |
> >
> > In the current _vm_unmap_aliases(), to reclaim those unmapped ranges
> > and flush them, it iterates the percpu vbq and calculates the flush
> > range from each vmap_block, as in the above two cases. Then it calls
> > purge_fragmented_blocks_allcpus() to purge the vmap_block of case (2),
> > since no mapping exists in it any more, and puts the purged
> > vmap_block's va into the purge tree/list. Later,
> > __purge_vmap_area_lazy() calculates the flush range from the purge
> > list again. Obviously, this takes the vmap_block va of case (2) into
> > account twice.
>
> Which made me look deeper into purge_fragmented_blocks():
>
>     list_for_each_entry_rcu(vb, &vbq->free, free_list) {
>
>         if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS))
>             continue;
>
>         spin_lock(&vb->lock);
>         if (vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS) {
>
> That means if an allocation does not find something in the free list
> then this can happen:
>
>     vaddr = vb_alloc(size)
>         vaddr = new_vmap_block(order, gfp_mask);
>
>     vb_free(vaddr, size)
>         vb->dirty = 1ULL << order;
>
>     purge_fragmented_blocks()
>         purge(most_recently_allocated_block);
>
>     vaddr = vb_alloc(size)
>         vaddr = new_vmap_block(order, gfp_mask);
>
> How does that make sense?
>
> That block would have hundreds of pages left and is the most recently
> allocated one. So the next vb_alloc() has to allocate a new one
> instead of using the one which was allocated just before.
>
> This clearly lacks a free space check so that blocks which have more
> free space than a certain threshold are not thrown away prematurely.
> Maybe it wants an age check too, so that blocks which are unused for a
> long time can be recycled, but that's an orthogonal issue.

You are right, the vmap_block alloc/free path does have the issue you
point out here. What I can say in its defense is that it should be fine
as long as VM_FLUSH_RESET_PERMS memory doesn't upset the situation. As
we see, the lazy flush is only triggered when lazy_max_pages() is
exceeded or alloc_vmap_area() can't find an available range. If either
of those happens, we really do need to flush and reclaim the unmapped
areas into the free list/tree, because the vmalloc address space has
run out. Even though a vmap_block may have much free space left, it
still needs to be purged to cope with that emergency. So, would it be
better if we pick VM_FLUSH_RESET_PERMS memory out and flush it alone,
and set a threshold for vmap_block purging?
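Something like the below is what I am thinking of. This is only an
untested sketch against my reading of the current
purge_fragmented_blocks(); the threshold name VMAP_PURGE_THRESHOLD and
its value are made up here purely for illustration:

    /*
     * Untested sketch. VMAP_PURGE_THRESHOLD is an arbitrary value
     * picked for illustration: keep a block around if more than 1/4
     * of it is still free.
     */
    #define VMAP_PURGE_THRESHOLD	(VMAP_BBMAP_BITS / 4)

    static void purge_fragmented_blocks(int cpu)
    {
            ...
            list_for_each_entry_rcu(vb, &vbq->free, free_list) {
                    /* Keep blocks which still have plenty of free space. */
                    if (vb->free > VMAP_PURGE_THRESHOLD)
                            continue;

                    if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS &&
                          vb->dirty != VMAP_BBMAP_BITS))
                            continue;

                    spin_lock(&vb->lock);
                    /* Re-check both conditions under vb->lock as before. */
                    ...
            }
            ...
    }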
> That aside, your patch still does not address what I pointed out to
> you and what my patch cures:
>
>                    pages      bits          dirtymin          dirtymax
>     vb_alloc(A)    255           0 -  254   VMAP_BBMAP_BITS   0
>     vb_alloc(B)    255         255 -  509   VMAP_BBMAP_BITS   0
>     vb_alloc(C)    255         510 -  764   VMAP_BBMAP_BITS   0
>     vb_alloc(D)    255         765 - 1020   VMAP_BBMAP_BITS   0
>
> The block stays on the free list because there are still 4 pages left,
> and it stays there until either _all_ free space is used or _all_
> allocated space is freed.
>
> Now the first allocation gets freed:
>
>     vb_free(A)     255           0 -  254   0                 254
>
> From there on _every_ invocation of __purge_vmap_area_lazy() will see
> this range as long as the block is on the free list:
>
>     list_for_each_entry_rcu(vb, &vbq->free, free_list) {
>         spin_lock(&vb->lock);
>         if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
>
> because this condition is true. So this flushes the same range over
> and over, no?
>
> The flush range gets larger over time as more allocations are freed,
> up to the point where the block vanishes from the free list.
>
> By resetting vb->dirty_min/max the freed range is only flushed _once_,
> no? The resulting flush range might still be excessively large, as I
> pointed out before:
>
>  1) Flush after freeing A
>
>     vb_free(A)     2             0 -    1   0                 1
>     Flush                                   VMAP_BBMAP_BITS   0  <- correct
>     vb_free(C)     2             6 -    7   6                 7
>     Flush                                   VMAP_BBMAP_BITS   0  <- correct
>
>  2) No flush between freeing A and C
>
>     vb_free(A)     2             0 -    1   0                 1
>     vb_free(C)     2             6 -    7   0                 7
>     Flush                                   VMAP_BBMAP_BITS   0  <- overbroad flush
>
>  3) No flush between freeing A, C, B
>
>     vb_free(A)     2             0 -    1   0                 1
>     vb_free(B)     2             6 -    7   0                 7
>     vb_free(C)     2             2 -    5   0                 7
>     Flush                                   VMAP_BBMAP_BITS   0  <- correct
>
> Obviously case 2 could be:
>
>     vb_free(A)     2             0 -    1   0                 1
>     vb_free(X)     2          1000 - 1001   1000              1001
>
> So that flush via purge_vmap_area_list() will ask to flush 1002 pages
> instead of 4, right? Again, that does not make sense.

Yes, I get your point now. I didn't read your cure code carefully;
sorry for that.

> The other issue I pointed out:
>
> Assume the block has (for simplicity) 255 allocations with a size of 4
> pages each, leaving again a free space of 4 pages.
>
> 254 of the allocations are freed, which means there is one remaining
> mapped. All 254 freed ones are flushed via __purge_vmap_area_lazy()
> over time.
>
> Now the last allocation is freed and the block is moved to the
> purge_vmap_area_list, which then does a full flush of the complete
> area, i.e. 4MB in that case, while in fact it only needs to flush 2
> pages.

That is easy to fix. For a vmap_block, I have marked it in va->flags
with VMAP_RAM|VMAP_BLOCK. When flushing the vas on the purge list, we
can skip the vmap_block vas, e.g. with a check like the sketch below. I
don't know how you will tackle the per-va flush Nadav pointed out, so I
won't attempt draft code for that part.
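For illustration only (untested; this assumes the flush range is still
computed by walking the purge list as today, and relies on the
VMAP_RAM|VMAP_BLOCK flags my earlier patch stores in va->flags):

    /*
     * Untested sketch: when walking the purge list to compute the
     * TLB flush range, skip vas which back a vmap_block. Their dirty
     * ranges have already been accounted for while iterating the
     * percpu vmap_block_queue in _vm_unmap_aliases().
     */
    list_for_each_entry(va, &purge_vmap_area_list, list) {
            if ((va->flags & (VMAP_RAM | VMAP_BLOCK)) ==
                (VMAP_RAM | VMAP_BLOCK))
                    continue;

            start = min(start, va->va_start);
            end = max(end, va->va_end);
    }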
> Also these intermediate flushes are inconsistent versus how fully
> utilized blocks are handled:
>
>     vb_alloc()
>         if (vb->free == 0)
>             list_del(vb->free_list);
>
> So all allocations which are freed after that point stay unflushed
> until the last allocation is freed, which moves the block to the
> purge_vmap_area_list, where it gets a full VA range flush.

It may be risky for them to stay unflushed until the last allocation is
freed. We use the vm_map_ram() interface to map passed-in pages into
the vmalloc area. Once vb_free() is called, the sub-region has been
unmapped, and the user may already have released the pages. A user of
vm_unmap_aliases() could be impacted if we don't flush the areas freed
by vb_free(): those areas have been unmapped, while stale TLB entries
for them still exist. I am not very sure about that. If we could hold
back the vmap_block flush until the block is purged without risk, it
would save us a lot of trouble.

> IOW, for blocks on the free list this cares about unflushed mappings
> of freed spaces, but for fully utilized blocks with freed spaces it
> obviously does not matter, right?

Yes, though it depends on how we flush them: flush each time there is
something dirty, or hold the flush until the block is purged, if
holding is allowed.

> So either we care about flushing the mappings of freed spaces or we
> do not, but caring in one case and ignoring it in the other case is
> inconsistent at best.