linux-mm.kvack.org archive mirror
From: Baoquan He <bhe@redhat.com>
To: Zhaoyang Huang <huangzhaoyang@gmail.com>
Cc: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Uladzislau Rezki <urezki@gmail.com>,
	Christoph Hellwig <hch@infradead.org>,
	Lorenzo Stoakes <lstoakes@gmail.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	steve.kang@unisoc.com, Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH] mm: fix incorrect vbq reference in purge_fragmented_block
Date: Thu, 30 May 2024 15:54:47 +0800	[thread overview]
Message-ID: <ZlgwxwN3k5vQVVvH@MiWiFi-R3L-srv> (raw)
In-Reply-To: <CAGWkznE=akrSBEQyq+f6tDN6fJ_J59WhJ-bvxpfrLUgTJ73h4g@mail.gmail.com>

On 05/30/24 at 03:35pm, Zhaoyang Huang wrote:
> On Thu, May 30, 2024 at 3:19 PM Baoquan He <bhe@redhat.com> wrote:
> >
> > On 05/30/24 at 10:51am, zhaoyang.huang wrote:
> > > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> > >
> > > A broken vbq->free list was reported on a v6.6-based system. It is
> > > caused by taking the wrong vbq->lock when touching vbq->free in
> > > purge_fragmented_block(). This was likely introduced by the commit in
> > > the Fixes tag below, which overlooked the vbq->lock issue.
> >
> > It would be helpful to provide more details: what is the symptom of the
> > breakage, and in which case does vbq->free get broken?
> The vmalloc area runs out on our ARM64 system during an erofs test
> because vm_map_ram() fails[1]. We found that one vbq->free->next points
> back to vbq->free itself, which prevents list_for_each_entry_rcu() from
> iterating the list; that is how we found the bug.

Thanks for this information, it is very helpful and important. It needs
to be put in the commit log for easier understanding. I am also wondering
about the vbq->free list breakage behind the exhausted vmalloc area:
could you say more about how it is caused? And do you think we need to
fix that vbq->free list breakage as well?
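
To make the suspected failure mode easier to follow, below is a minimal,
hypothetical C sketch of the pattern being described: a node that lives on
one queue's free list is deleted while only a *different* queue's lock is
held. This is not the vmalloc code, just an illustration of why such a
list_del can leave next/prev pointers inconsistent once another CPU
modifies the owning list concurrently.

#include <pthread.h>
#include <stdio.h>

struct node {
	struct node *next, *prev;
};

struct queue {
	pthread_mutex_t lock;
	struct node free;		/* circular doubly-linked list head */
};

static void list_init(struct node *h) { h->next = h->prev = h; }

static void list_add(struct node *n, struct node *h)
{
	n->next = h->next;
	n->prev = h;
	h->next->prev = n;
	h->next = n;
}

static void list_del(struct node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

/*
 * Buggy pattern: the caller locks whatever queue it happens to be
 * iterating (wrong_q), not the queue the node is actually linked on.
 * list_del() therefore runs without the owner's lock held, so a
 * concurrent list_add()/list_del() on the owning queue can interleave
 * with it and leave that list inconsistent (for example a head whose
 * ->next points back at itself while nodes are still linked).
 */
static void purge_buggy(struct queue *wrong_q, struct node *n)
{
	pthread_mutex_lock(&wrong_q->lock);
	list_del(n);			/* n's owning queue is NOT locked here */
	pthread_mutex_unlock(&wrong_q->lock);
}

int main(void)
{
	struct queue a = { .lock = PTHREAD_MUTEX_INITIALIZER };
	struct queue b = { .lock = PTHREAD_MUTEX_INITIALIZER };
	struct node n;

	list_init(&a.free);
	list_init(&b.free);
	list_add(&n, &a.free);		/* n lives on queue a... */
	purge_buggy(&b, &n);		/* ...but is deleted under queue b's lock */

	/* Single-threaded here, so nothing breaks; the corruption needs a
	 * concurrent modification of queue a happening under a's lock. */
	printf("queue a empty again: %d\n", a.free.next == &a.free);
	return 0;
}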

> 
> [1]
> PID: 1        TASK: ffffff80802b4e00  CPU: 6    COMMAND: "init"
>  #0 [ffffffc08006afe0] __switch_to at ffffffc08111d5cc
>  #1 [ffffffc08006b040] __schedule at ffffffc08111dde0
>  #2 [ffffffc08006b0a0] schedule at ffffffc08111e294
>  #3 [ffffffc08006b0d0] schedule_preempt_disabled at ffffffc08111e3f0
>  #4 [ffffffc08006b140] __mutex_lock at ffffffc08112068c
>  #5 [ffffffc08006b180] __mutex_lock_slowpath at ffffffc08111f8f8
>  #6 [ffffffc08006b1a0] mutex_lock at ffffffc08111f834
>  #7 [ffffffc08006b1d0] reclaim_and_purge_vmap_areas at ffffffc0803ebc3c
>  #8 [ffffffc08006b290] alloc_vmap_area at ffffffc0803e83fc
>  #9 [ffffffc08006b300] vm_map_ram at ffffffc0803e78c0
> #10 [ffffffc08006b420] z_erofs_lz4_decompress at ffffffc0806a49b0
> #11 [ffffffc08006b670] z_erofs_decompress_queue at ffffffc0806a8fd0
> #12 [ffffffc08006b860] z_erofs_runqueue at ffffffc0806a8744
> #13 [ffffffc08006b970] z_erofs_readahead at ffffffc0806a6cfc
> #14 [ffffffc08006ba00] read_pages at ffffffc08037ed78
> #15 [ffffffc08006ba70] page_cache_ra_unbounded at ffffffc08037eb58
> #16 [ffffffc08006bb00] page_cache_ra_order at ffffffc08037f42c
> #17 [ffffffc08006bbb0] do_sync_mmap_readahead at ffffffc080371d3c
> #18 [ffffffc08006bc40] filemap_fault at ffffffc080371774
> #19 [ffffffc08006bd60] handle_mm_fault at ffffffc0803cc118
> #20 [ffffffc08006bdc0] do_page_fault at ffffffc08112a618
> #21 [ffffffc08006be20] do_translation_fault at ffffffc08112a36c
> #22 [ffffffc08006be30] do_mem_abort at ffffffc0800bfbf0
> #23 [ffffffc08006be70] el0_ia at ffffffc08111583c
> #24 [ffffffc08006bea0] el0t_64_sync_handler at ffffffc0811156a4
> #25 [ffffffc08006bfe0] el0t_64_sync at ffffffc080091584
> 
> 
> >
> > >
> > > Fixes: fc1e0d980037 ("mm/vmalloc: prevent stale TLBs in fully utilized blocks")
> > >
> > > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> > > ---
> > >  mm/vmalloc.c | 11 +++++++----
> > >  1 file changed, 7 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > index 22aa63f4ef63..112b50431725 100644
> > > --- a/mm/vmalloc.c
> > > +++ b/mm/vmalloc.c
> > > @@ -2614,9 +2614,10 @@ static void free_vmap_block(struct vmap_block *vb)
> > >  }
> > >
> > >  static bool purge_fragmented_block(struct vmap_block *vb,
> > > -             struct vmap_block_queue *vbq, struct list_head *purge_list,
> > > -             bool force_purge)
> > > +             struct list_head *purge_list, bool force_purge)
> > >  {
> > > +     struct vmap_block_queue *vbq;
> > > +
> > >       if (vb->free + vb->dirty != VMAP_BBMAP_BITS ||
> > >           vb->dirty == VMAP_BBMAP_BITS)
> > >               return false;
> > > @@ -2625,6 +2626,8 @@ static bool purge_fragmented_block(struct vmap_block *vb,
> > >       if (!(force_purge || vb->free < VMAP_PURGE_THRESHOLD))
> > >               return false;
> > >
> > > +     vbq = container_of(addr_to_vb_xa(vb->va->va_start),
> > > +             struct vmap_block_queue, vmap_blocks);
> > >       /* prevent further allocs after releasing lock */
> > >       WRITE_ONCE(vb->free, 0);
> > >       /* prevent purging it again */
> > > @@ -2664,7 +2667,7 @@ static void purge_fragmented_blocks(int cpu)
> > >                       continue;
> > >
> > >               spin_lock(&vb->lock);
> > > -             purge_fragmented_block(vb, vbq, &purge, true);
> > > +             purge_fragmented_block(vb, &purge, true);
> > >               spin_unlock(&vb->lock);
> > >       }
> > >       rcu_read_unlock();
> > > @@ -2801,7 +2804,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
> > >                        * not purgeable, check whether there is dirty
> > >                        * space to be flushed.
> > >                        */
> > > -                     if (!purge_fragmented_block(vb, vbq, &purge_list, false) &&
> > > +                     if (!purge_fragmented_block(vb, &purge_list, false) &&
> > >                           vb->dirty_max && vb->dirty != VMAP_BBMAP_BITS) {
> > >                               unsigned long va_start = vb->va->va_start;
> > >                               unsigned long s, e;
> > > --
> > > 2.25.1
> > >
> > >
> >
> 
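
The patch derives vbq from addr_to_vb_xa(vb->va->va_start), i.e. from the
block's own start address, instead of trusting whichever vbq the caller
happened to be iterating. As a simplified, hypothetical model of what
address-hashed queue lookup means (illustrative names and sizes, not the
kernel's actual addr_to_vb_xa() implementation):

#include <stdio.h>

#define VMAP_BLOCK_SIZE	(64UL * 4096)	/* assumed block granularity, illustrative only */
#define NR_QUEUES	8		/* stands in for the number of possible CPUs */

struct block_queue {
	int placeholder;		/* the real struct carries a lock, a free list and an xarray */
};

static struct block_queue queues[NR_QUEUES];

/*
 * Toy stand-in for mapping an address to one of N per-queue structures:
 * whichever scheme assigned the block to a queue must also be used when
 * the block is later unlinked, so that one lock serializes both sides.
 */
static struct block_queue *addr_to_queue(unsigned long va_start)
{
	return &queues[(va_start / VMAP_BLOCK_SIZE) % NR_QUEUES];
}

int main(void)
{
	unsigned long va_start = 0x12340000UL;

	/* The purge side would lock this queue before unlinking the block. */
	printf("queue index: %ld\n", (long)(addr_to_queue(va_start) - queues));
	return 0;
}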



Thread overview: 9+ messages
2024-05-30  2:51 zhaoyang.huang
2024-05-30  2:56 ` Zhaoyang Huang
2024-05-30  7:18 ` Baoquan He
2024-05-30  7:35   ` Zhaoyang Huang
2024-05-30  7:54     ` Baoquan He [this message]
2024-05-30  8:18       ` Zhaoyang Huang
2024-05-30  9:16 ` Chuanhua Han
2024-05-30  9:25   ` Zhaoyang Huang
2024-05-30  9:46     ` Chuanhua Han
