From: Jonathan Adams <jwadams@google.com>
To: Henry Burns <henryburns@google.com>
Cc: Vitaly Vul <vitaly.vul@sony.com>,
Andrew Morton <akpm@linux-foundation.org>,
Shakeel Butt <shakeelb@google.com>,
David Howells <dhowells@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
Al Viro <viro@zeniv.linux.org.uk>, Linux-MM <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>,
stable@vger.kernel.org
Subject: Re: [PATCH] mm/z3fold.c: Fix z3fold_destroy_pool() ordering
Date: Fri, 26 Jul 2019 16:19:45 -0700
Message-ID: <CA+VK+GM4AXrmZtv_narEU6pHO+NGrTc74iSSUNNbutZySfXjRw@mail.gmail.com>
In-Reply-To: <20190726224810.79660-1-henryburns@google.com>

On Fri, Jul 26, 2019 at 3:48 PM Henry Burns <henryburns@google.com> wrote:
>
> The constraint from the zpool use of z3fold_destroy_pool() is that there
> are no outstanding handles to memory (so no active allocations), but it
> is still possible for there to be outstanding work queued on either of
> the pool's two workqueues.
>
> If there is work queued on pool->compact_wq when z3fold_destroy_pool()
> is called, it will do:
>
> z3fold_destroy_pool()
>   destroy_workqueue(pool->release_wq)
>   destroy_workqueue(pool->compact_wq)
>     drain_workqueue(pool->compact_wq)
>       do_compact_page(zhdr)
>         kref_put(&zhdr->refcount)
>           __release_z3fold_page(zhdr, ...)
>             queue_work_on(pool->release_wq, &pool->work) *BOOM*
>
> So compact_wq needs to be destroyed before release_wq.
>
> Fixes: 5d03a6613957 ("mm/z3fold.c: use kref to prevent page free/compact race")
>
> Signed-off-by: Henry Burns <henryburns@google.com>
Reviewed-by: Jonathan Adams <jwadams@google.com>
> Cc: <stable@vger.kernel.org>
> ---
> mm/z3fold.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 1a029a7432ee..43de92f52961 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -818,8 +818,15 @@ static void z3fold_destroy_pool(struct z3fold_pool *pool)
>  {
>  	kmem_cache_destroy(pool->c_handle);
>  	z3fold_unregister_migration(pool);
> -	destroy_workqueue(pool->release_wq);
> +
> +	/*
> +	 * We need to destroy pool->compact_wq before pool->release_wq,
> +	 * as any pending work on pool->compact_wq will call
> +	 * queue_work(pool->release_wq, &pool->work).
> +	 */
> +
>  	destroy_workqueue(pool->compact_wq);
> +	destroy_workqueue(pool->release_wq);
>  	kfree(pool);
>  }
>
> --
> 2.22.0.709.g102302147b-goog
>