linux-mm.kvack.org archive mirror
From: Thomas Gleixner <tglx@linutronix.de>
To: Vlastimil Babka <vbabka@suse.cz>,
	syzbot <syzbot+50e25cfa4f917d41749f@syzkaller.appspotmail.com>,
	bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com,
	linux-kernel@vger.kernel.org, mingo@redhat.com,
	syzkaller-bugs@googlegroups.com, x86@kernel.org,
	linux-mm@kvack.org,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Tejun Heo <tj@kernel.org>, Lai Jiangshan <jiangshanlai@gmail.com>
Subject: Re: [syzbot] [kernel?] WARNING in flush_cpu_slab
Date: Fri, 24 May 2024 00:32:45 +0200
Message-ID: <87r0dsdur6.ffs@tglx>
In-Reply-To: <2149ee23-5321-4422-808f-e6a9046662fc@suse.cz>

On Thu, May 23 2024 at 23:03, Vlastimil Babka wrote:
> On 5/23/24 12:36 PM, Thomas Gleixner wrote:
>>> ------------[ cut here ]------------
>>> DEBUG_LOCKS_WARN_ON(l->owner)
>>> WARNING: CPU: 3 PID: 5221 at include/linux/local_lock_internal.h:30 local_lock_acquire include/linux/local_lock_internal.h:30 [inline]
>>> WARNING: CPU: 3 PID: 5221 at include/linux/local_lock_internal.h:30 flush_slab mm/slub.c:3088 [inline]
>>> WARNING: CPU: 3 PID: 5221 at include/linux/local_lock_internal.h:30 flush_cpu_slab+0x37f/0x410 mm/slub.c:3146
>
> I'm puzzled by this. We use local_lock_irqsave() on !PREEMPT_RT everywhere.
> IIUC this warning says we did the irqsave() and then found out somebody else
> already set the owner? But that means they also did that irqsave() and set
> themselves as l->owner. Does that mean there would be a spurious irq enable
> that didn't go through local_unlock_irqrestore()?
>
> Also this particular stack is from the work, which is scheduled by
> queue_work_on() in flush_all_cpus_locked(), which also has a
> lockdep_assert_cpus_held() so it should fulfill the "the caller must ensure
> the cpu doesn't go away" property. But I think even if this ended up on the
> wrong cpu (for the full duration or migrated while processing the work item)
> somehow, it wouldn't be able to cause such a warning, but rather corrupt
> something else.
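
For reference, the check which triggers here is the lockdep owner tracking
in the !PREEMPT_RT local_lock implementation. A minimal sketch of what
local_lock_irqsave() boils down to, paraphrased from memory of
include/linux/local_lock_internal.h with CONFIG_DEBUG_LOCK_ALLOC enabled
(not the verbatim upstream source, details may differ):

	typedef struct {
		struct lockdep_map	dep_map;
		struct task_struct	*owner;
	} local_lock_t;

	static inline void local_lock_acquire(local_lock_t *l)
	{
		lock_map_acquire(&l->dep_map);
		/* This is the DEBUG_LOCKS_WARN_ON() from the splat above */
		DEBUG_LOCKS_WARN_ON(l->owner);
		l->owner = current;
	}

	static inline void local_lock_release(local_lock_t *l)
	{
		DEBUG_LOCKS_WARN_ON(l->owner != current);
		l->owner = NULL;
		lock_map_release(&l->dep_map);
	}

	/* Interrupts are off on this CPU before the owner check runs */
	#define local_lock_irqsave(lock, flags)				\
		do {							\
			local_irq_save(flags);				\
			local_lock_acquire(this_cpu_ptr(lock));		\
		} while (0)

With interrupts disabled on the local CPU before local_lock_acquire() runs,
a non-NULL l->owner should only be observable if the previous holder never
went through local_lock_release(), or if the field itself was scribbled on.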

Indeed. There is another report which makes no sense either:

 https://lore.kernel.org/lkml/000000000000fa09d906191c3ee5@google.com

Both look like data corruption issues caused by whatever...
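
For completeness, the scheduling side discussed in the quoted mail looks
roughly like this - a paraphrased sketch of flush_all_cpus_locked() and
flush_cpu_slab() in mm/slub.c from memory (the real code additionally
serializes on a flush mutex and skips CPUs which have nothing to flush):

	struct slub_flush_work {
		struct work_struct	work;
		struct kmem_cache	*s;
		bool			skip;
	};

	static DEFINE_PER_CPU(struct slub_flush_work, slub_flush);
	static struct workqueue_struct *flushwq;	/* set up at init */

	/* Runs as work bound to one CPU and flushes that CPU's slab */
	static void flush_cpu_slab(struct work_struct *w)
	{
		struct slub_flush_work *sfw;

		sfw = container_of(w, struct slub_flush_work, work);

		/*
		 * flush_slab() takes the cpu_slab local lock with
		 * local_lock_irqsave(), which is where the warning fires.
		 */
		flush_slab(sfw->s, this_cpu_ptr(sfw->s->cpu_slab));
	}

	static void flush_all_cpus_locked(struct kmem_cache *s)
	{
		unsigned int cpu;

		/* Caller must hold cpus_read_lock() */
		lockdep_assert_cpus_held();

		for_each_online_cpu(cpu) {
			struct slub_flush_work *sfw = &per_cpu(slub_flush, cpu);

			INIT_WORK(&sfw->work, flush_cpu_slab);
			sfw->s = s;
			queue_work_on(cpu, flushwq, &sfw->work);
		}

		for_each_online_cpu(cpu)
			flush_work(&per_cpu(slub_flush, cpu).work);
	}

So even a work item ending up on the wrong CPU would poke at that CPU's
cpu_slab under its own local lock; it could corrupt the wrong data, but it
should not be able to make the owner check trip.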

Thanks,

        tglx



Thread overview: 3+ messages
     [not found] <0000000000008c9d27061915ca9c@google.com>
     [not found] ` <87v834g6hn.ffs@tglx>
     [not found]   ` <2149ee23-5321-4422-808f-e6a9046662fc@suse.cz>
2024-05-23 22:32     ` Thomas Gleixner [this message]
2024-05-24  8:02       ` Vlastimil Babka
2024-05-24  6:43     ` Sebastian Andrzej Siewior
