From: Baoquan He <bhe@redhat.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: "Russell King (Oracle)" <linux@armlinux.org.uk>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, Christoph Hellwig <hch@lst.de>,
	Uladzislau Rezki <urezki@gmail.com>,
	Lorenzo Stoakes <lstoakes@gmail.com>,
	Peter Zijlstra <peterz@infradead.org>,
	John Ogness <jogness@linutronix.de>,
	linux-arm-kernel@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Marc Zyngier <maz@kernel.org>,
	x86@kernel.org
Subject: Re: [RFC PATCH 3/3] mm/vmalloc.c: change _vm_unmap_aliases() to do purge firstly
Date: Sat, 20 May 2023 07:46:12 +0800	[thread overview]
Message-ID: <ZGgKRGaIChzdnUqB@MiWiFi-R3L-srv> (raw)
In-Reply-To: <87cz2w415t.ffs@tglx>

On 05/19/23 at 08:38pm, Thomas Gleixner wrote:
> On Fri, May 19 2023 at 20:03, Baoquan He wrote:
> > After vb_free() invocation, the va will be purged and put into purge
> > tree/list if the entire vmap_block is dirty. If not entirely dirty, the
> > vmap_block is still in percpu vmap_block_queue list, just like below two
> > graphs:
> >
> > (1)
> >   |-----|------------|-----------|-------|
> >   |dirty|still mapped|   dirty   | free  |
> >
> > (2)
> >   |------------------------------|-------|
> >   |         dirty                | free  |
> >
> > In the current _vm_unmap_aliases(), to reclaim those unmapped range and
> > flush, it will iterate percpu vbq to calculate the range from vmap_block
> > like above two cases. Then call purge_fragmented_blocks_allcpus()
> > to purge the vmap_block in case 2 since no mapping exists right now,
> > and put these purged vmap_block va into purge tree/list. Then in
> > __purge_vmap_area_lazy(), it will continue calculating the flush range
> > from purge list. Obviously, this will take vmap_block va in the 2nd case
> > into account twice.
> 
> Which made me look deeper into purge_fragmented_blocks()
> 
> 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
> 
> 		if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS))
> 			continue;
> 
>                 spin_lock(&vb->lock);
> 		if (vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS) {
> 
> That means if an allocation does not find something in the free list
> then this can happen:
> 
> vaddr = vb_alloc(size)
>      vaddr = new_vmap_block(order, gfp_mask);
> 
> vb_free(vaddr, size)
>      vb->dirty = 1ULL << order;
> 
> purge_fragmented_blocks()
>       purge(most_recently_allocated_block);
> 
> vaddr = vb_alloc(size)
>      vaddr = new_vmap_block(order, gfp_mask);
> 
> How does that make sense?
> 
> That block would have hundreds of pages left and is the most recently
> allocated. So the next vb_alloc() has to reallocate one instead of using
> the one which was allocated just before.
> 
> This clearly lacks a free space check so that blocks which have more
> free space than a certain threshold are not thrown away prematurely.
> Maybe it wants an age check too, so that blocks which are unused for a
> long time can be recycled, but that's an orthogonal issue.

You are right, the vmap_block alloc/free path does have the issue you
point out here. All I can say in its defense is that it should be fine
as long as VM_FLUSH_RESET_PERMS memory doesn't upset the situation. As
we can see, the lazy flush is only triggered when lazy_max_pages() is
exceeded, or when alloc_vmap_area() can't find an available range. If
either of these happens, it means we really need to flush and reclaim
the unmapped areas into the free list/tree, since the vmalloc address
space has run out. Even though a vmap_block may have much free space
left, it still needs to be purged to cope with the emergency.

So, would it be better if we picked VM_FLUSH_RESET_PERMS memory out and
flushed it alone, and set a free-space threshold for vmap_block purging?
Something along the lines of the sketch below.
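
A minimal sketch of the threshold idea against purge_fragmented_blocks(),
untested; VMAP_PURGE_THRESHOLD is a made-up name and value, just for
illustration:

	/* Made-up threshold: keep blocks with plenty of free space. */
	#define VMAP_PURGE_THRESHOLD	(VMAP_BBMAP_BITS / 4)

	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
		/*
		 * Don't throw away a block which still has a useful
		 * amount of free space, even if nothing is mapped in
		 * it right now; a later vb_alloc() can reuse it.
		 */
		if (vb->free > VMAP_PURGE_THRESHOLD)
			continue;

		if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS &&
		      vb->dirty != VMAP_BBMAP_BITS))
			continue;

		/* ... existing purge logic ... */
	}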

> 
> 
> That aside your patch does still not address what I pointed out to you
> and what my patch cures:
> 
>                 pages    bits       dirtymin         dirtymax
>    vb_alloc(A)  255      0 -  254   VMAP_BBMAP_BITS  0
>    vb_alloc(B)  255    255 -  509   VMAP_BBMAP_BITS  0
>    vb_alloc(C)  255    510 -  764   VMAP_BBMAP_BITS  0
>    vb_alloc(D)  255    765 - 1020   VMAP_BBMAP_BITS  0
> 
> The block stays on the free list because there are still 4 pages left
> and it stays there until either _all_ free space is used or _all_
> allocated space is freed.
> 
> Now the first allocation gets freed:
> 
>    vb_free(A)    255     0 - 254  0                    254
> 
> From there on _every_ invocation of __purge_vmap_area_lazy() will see
> this range as long as the block is on the free list:
> 
>    list_for_each_entry_rcu(vb, &vbq->free, free_list) {
> 	spin_lock(&vb->lock);
> 	if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
> 
> because this condition is true. So this flushes the same range over and
> over, no?
> 
> This flush range gets larger over time the more allocations are freed up
> to the point where the block vanishes from the free list.
> 
> By resetting vb->dirty_min/max the freed range is only flushed _once_,
> no? The resulting flush range might still be excessively large as I
> pointed out before:
> 
> 1) Flush after freeing A
> 
>    vb_free(A)    2       0 -    1   0                1
>    Flush                            VMAP_BBMAP_BITS  0    <- correct
>    vb_free(C)    2       6 -    7   6                7
>    Flush                            VMAP_BBMAP_BITS  0    <- correct
> 
> 2) No flush between freeing A and C
> 
>    vb_free(A)    2       0 -    1   0                1
>    vb_free(C)    2       6 -    7   0                7
>    Flush                            VMAP_BBMAP_BITS  0    <- overbroad flush
> 
> 3) No flush between freeing A, C, B
> 
>    vb_free(A)    2       0 -    1   0                1
>    vb_free(B)    2       6 -    7   0                7
>    vb_free(C)    2       2 -    5   0                7
>    Flush                            VMAP_BBMAP_BITS  0    <- correct
> 
> Obviously case 2 could be
> 
>    vb_free(A)    2       0 -    1   0                1
>    vb_free(X)    2    1000 - 1001   1000             1001
> 
>    So that flush via purge_vmap_area_list() will ask to flush 1002 pages
>    instead of 4, right? Again, that does not make sense.

Yes, I get your point now. I didn't read your cure carefully enough;
sorry for that.

> 
> The other issue I pointed out:
> 
> Assume the block has (for simplicity) 255 allocations size of 4 pages,
> again free space of 4 pages.
> 
> 254 allocations are freed, which means there is one remaining
> mapped. All 254 freed are flushed via __purge_vmap_area_lazy() over
> time.
> 
> Now the last allocation is freed and the block is moved to the
> purge_vmap_area_list, which then does a full flush of the complete area,
> i.e. 4MB in that case, while in fact it only needs to flush 2 pages.

That is easy to fix. For vmap_block, I have marked it in va->flags with
VMAP_RAM|VMAP_BLOCK. When flushing the vas on the purge list, we can
skip the vmap_block vas, e.g. along the lines of the sketch below. I
don't know how you will tackle the per-va flush Nadav pointed out, so I
won't give draft code beyond that.
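
Roughly like this, untested, using the VMAP_RAM/VMAP_BLOCK bits in
va->flags; the loop shape is simplified compared with the real
__purge_vmap_area_lazy():

	/*
	 * Sketch: when computing the TLB flush range from the purge
	 * list, skip vas which back a vmap_block. Their dirty ranges
	 * are assumed to have been flushed already when the per-cpu
	 * vmap_blocks were handled in _vm_unmap_aliases().
	 */
	list_for_each_entry(va, &local_purge_list, list) {
		if ((va->flags & (VMAP_RAM | VMAP_BLOCK)) ==
		    (VMAP_RAM | VMAP_BLOCK))
			continue;

		start = min(start, va->va_start);
		end = max(end, va->va_end);
	}

	flush_tlb_kernel_range(start, end);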

> 
> 
> Also these intermediate flushes are inconsistent versus how fully
> utilized blocks are handled:
> 
> vb_alloc()
>       if (vb->free == 0)
>           list_del(vb->free_list);
> 
> So all allocations which are freed after that point stay unflushed until
> the last allocation is freed which moves the block to the
> purge_vmap_area_list, where it gets a full VA range flush.

It may be risky to leave them unflushed until the last allocation is
freed. We use the vm_map_ram() interface to map caller-passed pages
into the vmalloc area. Once vb_free() is called, the sub-region has
been unmapped and the user may already have released the pages. Users
of vm_unmap_aliases() could be impacted if we don't flush the areas
freed with vb_free(): in reality those areas have been unmapped, but
stale TLB entries may still exist (the usage sketch below illustrates
the concern). I'm not very sure about that.

If we can hold off the vmap_block flush until purge time without risk,
it will save us a lot of trouble.
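
For reference, the usage pattern I am worried about looks roughly like
this, simplified, with error handling omitted and NR standing in for
the caller's page count:

	/* Caller provides its own pages and maps them lazily. */
	void *va = vm_map_ram(pages, NR, NUMA_NO_NODE);

	/* ... use the mapping ... */

	/* Unmapped now, but stale TLB entries may survive. */
	vm_unmap_ram(va, NR);

	/*
	 * A user such as a driver changing page attributes calls this
	 * and expects no stale aliases of those pages to be left in
	 * any TLB afterwards.
	 */
	vm_unmap_aliases();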

> 
> IOW, for blocks on the free list this cares about unflushed mappings of
> freed spaces, but for fully utilized blocks with freed spaces it does
> obviously not matter, right?

Yes, though it depends on how we flush them: either flush each time
there are dirty ranges, or hold the flush until purge time if holding
is allowed.

> 
> So either we care about flushing the mappings of freed spaces or we do
> not, but caring in one case and ignoring it in the other case is
> inconsistent at best.
> 


