From: Thomas Gleixner <tglx@linutronix.de>
To: linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
Christoph Hellwig <hch@lst.de>,
Uladzislau Rezki <urezki@gmail.com>,
Lorenzo Stoakes <lstoakes@gmail.com>,
Peter Zijlstra <peterz@infradead.org>,
Baoquan He <bhe@redhat.com>
Subject: [patch 2/6] mm/vmalloc: Avoid iterating over per CPU vmap blocks twice
Date: Tue, 23 May 2023 16:02:12 +0200 (CEST)
Message-ID: <20230523140002.634591885@linutronix.de>
In-Reply-To: <20230523135902.517032811@linutronix.de>
_vm_unmap_aliases() walks the per CPU xarrays to find partially unmapped
blocks and then walks the per CPU free lists to purge fragmented blocks.
Arguably that's a waste of CPU cycles and cache lines, as the full xarray
walk already touches every block.

Avoid this double iteration:

 - Split out the code to purge one block and the code to free the local
   purge list into helper functions.

 - Try to purge the fragmented blocks in the xarray walk before looking at
   their dirty space.
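
The resulting single-pass pattern can be sketched in isolation. This is a
minimal, hypothetical illustration, not the kernel code: struct block,
try_purge() and walk_once() are simplified stand-ins for struct vmap_block
and the helpers introduced below, with the locking and RCU list handling
omitted.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define VMAP_BBMAP_BITS 1024

/* Simplified stand-in for struct vmap_block. */
struct block {
	unsigned int free;	/* allocatable space */
	unsigned int dirty;	/* unmapped, not yet flushed space */
	bool purged;
	bool flushed;
};

/*
 * Same condition as purge_fragmented_block(): all space is accounted
 * for (free + dirty == VMAP_BBMAP_BITS) but the block is not entirely
 * dirty, i.e. it is fragmented and can be reclaimed as a whole.
 */
static bool try_purge(struct block *b)
{
	if (!(b->free + b->dirty == VMAP_BBMAP_BITS &&
	      b->dirty != VMAP_BBMAP_BITS))
		return false;

	b->free = 0;			/* prevent further allocs */
	b->dirty = VMAP_BBMAP_BITS;	/* prevent purging it again */
	b->purged = true;
	return true;
}

/* One walk: try to purge first, only then consider flushing dirty space. */
static void walk_once(struct block *blocks, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		struct block *b = &blocks[i];

		if (!try_purge(b) && b->dirty && b->dirty != VMAP_BBMAP_BITS)
			b->flushed = true;
	}
}
```

A fragmented block is purged and never reaches the flush path; a block with
some dirty space but not fully accounted for is only flushed; a fully dirty
block is left to the lazy purge, which is the same decision tree the xarray
walk below implements in one pass.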
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
mm/vmalloc.c | 66 ++++++++++++++++++++++++++++++++++++++---------------------
1 file changed, 43 insertions(+), 23 deletions(-)
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2086,39 +2086,52 @@ static void free_vmap_block(struct vmap_
kfree_rcu(vb, rcu_head);
}
+static bool purge_fragmented_block(struct vmap_block *vb, struct vmap_block_queue *vbq,
+ struct list_head *purge_list)
+{
+ if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS))
+ return false;
+
+ /* prevent further allocs after releasing lock */
+ vb->free = 0;
+ /* prevent purging it again */
+ vb->dirty = VMAP_BBMAP_BITS;
+ vb->dirty_min = 0;
+ vb->dirty_max = VMAP_BBMAP_BITS;
+ spin_lock(&vbq->lock);
+ list_del_rcu(&vb->free_list);
+ spin_unlock(&vbq->lock);
+ list_add_tail(&vb->purge, purge_list);
+ return true;
+}
+
+static void free_purged_blocks(struct list_head *purge_list)
+{
+ struct vmap_block *vb, *n_vb;
+
+ list_for_each_entry_safe(vb, n_vb, purge_list, purge) {
+ list_del(&vb->purge);
+ free_vmap_block(vb);
+ }
+}
+
static void purge_fragmented_blocks(int cpu)
{
LIST_HEAD(purge);
struct vmap_block *vb;
- struct vmap_block *n_vb;
struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
rcu_read_lock();
list_for_each_entry_rcu(vb, &vbq->free, free_list) {
-
if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS))
continue;
spin_lock(&vb->lock);
- if (vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS) {
- vb->free = 0; /* prevent further allocs after releasing lock */
- vb->dirty = VMAP_BBMAP_BITS; /* prevent purging it again */
- vb->dirty_min = 0;
- vb->dirty_max = VMAP_BBMAP_BITS;
- spin_lock(&vbq->lock);
- list_del_rcu(&vb->free_list);
- spin_unlock(&vbq->lock);
- spin_unlock(&vb->lock);
- list_add_tail(&vb->purge, &purge);
- } else
- spin_unlock(&vb->lock);
+ purge_fragmented_block(vb, vbq, &purge);
+ spin_unlock(&vb->lock);
}
rcu_read_unlock();
-
- list_for_each_entry_safe(vb, n_vb, &purge, purge) {
- list_del(&vb->purge);
- free_vmap_block(vb);
- }
+ free_purged_blocks(&purge);
}
static void purge_fragmented_blocks_allcpus(void)
@@ -2226,12 +2239,13 @@ static void vb_free(unsigned long addr,
static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
{
+ LIST_HEAD(purge_list);
int cpu;
if (unlikely(!vmap_initialized))
return;
- might_sleep();
+ mutex_lock(&vmap_purge_lock);
for_each_possible_cpu(cpu) {
struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
@@ -2241,7 +2255,14 @@ static void _vm_unmap_aliases(unsigned l
rcu_read_lock();
xa_for_each(&vbq->vmap_blocks, idx, vb) {
spin_lock(&vb->lock);
- if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
+
+ /*
+ * Try to purge a fragmented block first. If it's
+ * not purgeable, check whether there is dirty
+ * space to be flushed.
+ */
+ if (!purge_fragmented_block(vb, vbq, &purge_list) &&
+ vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
unsigned long va_start = vb->va->va_start;
unsigned long s, e;
@@ -2257,9 +2278,8 @@ static void _vm_unmap_aliases(unsigned l
}
rcu_read_unlock();
}
+ free_purged_blocks(&purge_list);
- mutex_lock(&vmap_purge_lock);
- purge_fragmented_blocks_allcpus();
if (!__purge_vmap_area_lazy(start, end) && flush)
flush_tlb_kernel_range(start, end);
mutex_unlock(&vmap_purge_lock);