From: Vitaly Wool <vitaly.wool@konsulko.com>
To: Mike Galbraith <efault@gmx.de>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	 LKML <linux-kernel@vger.kernel.org>,
	Linux-MM <linux-mm@kvack.org>,
	 Andrew Morton <akpm@linux-foundation.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	 Thomas Gleixner <tglx@linutronix.de>,
	linux-rt-users@vger.kernel.org
Subject: Re: scheduling while atomic in z3fold
Date: Mon, 7 Dec 2020 12:52:47 +0100	[thread overview]
Message-ID: <CAM4kBBL5+xNWq6DWHY6nQjwDTj8PZKem-rGuFvimi7jekjA+Xw@mail.gmail.com> (raw)
In-Reply-To: <3ffed6172820f2e8e821e1b8817dbd0bdd693c26.camel@gmx.de>

On Mon, Dec 7, 2020 at 3:18 AM Mike Galbraith <efault@gmx.de> wrote:
>
> On Mon, 2020-12-07 at 02:05 +0100, Vitaly Wool wrote:
> >
> > Could you please try the following patch in your setup:
>
> crash> gdb list *z3fold_zpool_free+0x527
> 0xffffffffc0e14487 is in z3fold_zpool_free (mm/z3fold.c:341).
> 336                     if (slots->slot[i]) {
> 337                             is_free = false;
> 338                             break;
> 339                     }
> 340             }
> 341             write_unlock(&slots->lock);  <== boom
> 342
> 343             if (is_free) {
> 344                     struct z3fold_pool *pool = slots_to_pool(slots);
> 345
> crash> z3fold_buddy_slots -x ffff99a3287b8780
> struct z3fold_buddy_slots {
>   slot = {0xdeadbeef, 0xdeadbeef, 0xdeadbeef, 0xdeadbeef},
>   pool = 0xffff99a3146b8400,
>   lock = {
>     rtmutex = {
>       wait_lock = {
>         raw_lock = {
>           {
>             val = {
>               counter = 0x1
>             },
>             {
>               locked = 0x1,
>               pending = 0x0
>             },
>             {
>               locked_pending = 0x1,
>               tail = 0x0
>             }
>           }
>         }
>       },
>       waiters = {
>         rb_root = {
>           rb_node = 0xffff99a3287b8e00
>         },
>         rb_leftmost = 0x0
>       },
>       owner = 0xffff99a355c24500,
>       save_state = 0x1
>     },
>     readers = {
>       counter = 0x80000000
>     }
>   }
> }

Thanks. This trace beats me; I don't quite see how it could have
happened. Hitting the write_unlock at line 341 would mean that the
HANDLES_ORPHANED bit is set, but obviously it isn't.
Could you please comment out the ".shrink = z3fold_zpool_shrink" line
(see the sketch below) and retry? Reclaim is the trickiest path here,
since I have to drop the page lock while reclaiming.
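
For reference, this is roughly the change I mean (a sketch only: the
neighbouring fields of z3fold_zpool_driver are quoted from memory and
may differ slightly in your tree):

static struct zpool_driver z3fold_zpool_driver = {
	/* ... other callbacks unchanged ... */
	.malloc =	z3fold_zpool_malloc,
	.free =		z3fold_zpool_free,
	/* .shrink =	z3fold_zpool_shrink,  temporarily disabled for this test */
	.map =		z3fold_zpool_map,
	.unmap =	z3fold_zpool_unmap,
	/* ... */
};

With .shrink left unset, zpool_shrink() on a z3fold pool should simply
fail (if I remember the zpool glue right), so the z3fold reclaim path
never runs and we can tell whether reclaim is what corrupts the slots.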

Thanks,
   Vitaly

> > diff --git a/mm/z3fold.c b/mm/z3fold.c
> > index 18feaa0bc537..efe9a012643d 100644
> > --- a/mm/z3fold.c
> > +++ b/mm/z3fold.c
> > @@ -544,12 +544,17 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
> >                       break;
> >               }
> >       }
> > -     if (!is_free)
> > +     if (!is_free) {
> >               set_bit(HANDLES_ORPHANED, &zhdr->slots->pool);
> > -     read_unlock(&zhdr->slots->lock);
> > -
> > -     if (is_free)
> > +             read_unlock(&zhdr->slots->lock);
> > +     } else {
> > +             zhdr->slots->slot[0] =
> > +                     zhdr->slots->slot[1] =
> > +                     zhdr->slots->slot[2] =
> > +                     zhdr->slots->slot[3] = 0xdeadbeef;
> > +             read_unlock(&zhdr->slots->lock);
> >               kmem_cache_free(pool->c_handle, zhdr->slots);
> > +     }
> >
> >       if (locked)
> >               z3fold_page_unlock(zhdr);
>


