From: Miaohe Lin <linmiaohe@huawei.com>
To: <akpm@linux-foundation.org>, <minchan@kernel.org>, <ngupta@vflare.org>
Cc: <senozhatsky@chromium.org>, <henryburns@google.com>,
<linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2] mm/zsmalloc.c: close race window between zs_pool_dec_isolated() and zs_unregister_migration()
Date: Thu, 8 Jul 2021 20:14:26 +0800
Message-ID: <7a16cf45-eaed-e7ba-bf47-2382b2c542f2@huawei.com>
In-Reply-To: <20210708115117.12359-1-linmiaohe@huawei.com>

Sorry for the noise! Please ignore this duplicated one...
On 2021/7/8 19:51, Miaohe Lin wrote:
> There is a possible race window between zs_pool_dec_isolated() and
> zs_unregister_migration(): wait_for_isolated_drain() checks the
> isolated count without holding class->lock, and there is no ordering
> inside zs_pool_dec_isolated(). Thus the race below is possible:
>
> zs_pool_dec_isolated                      zs_unregister_migration
>   check pool->destroying != 0
>                                             pool->destroying = true;
>                                             smp_mb();
>                                             wait_for_isolated_drain()
>                                               wait for pool->isolated_pages == 0
>   atomic_long_dec(&pool->isolated_pages);
>   atomic_long_read(&pool->isolated_pages) == 0
>
> Since pool->destroying is observed as false before atomic_long_dec()
> updates pool->isolated_pages, the wakeup on pool->migration_wait is
> missed.
>
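> For reference, the waiting side (introduced by the commit named in the
> Fixes tag below) looks roughly like this; a condensed sketch, not a
> literal excerpt, and details may differ between kernel versions:
>
>     static void wait_for_isolated_drain(struct zs_pool *pool)
>     {
>             /* Sleep until every isolated page has been put back. */
>             wait_event(pool->migration_wait,
>                        atomic_long_read(&pool->isolated_pages) == 0);
>     }
>
>     static void zs_unregister_migration(struct zs_pool *pool)
>     {
>             pool->destroying = true;
>             /*
>              * Make pool->destroying visible before the read of
>              * pool->isolated_pages in wait_for_isolated_drain().
>              * This is the smp_mb() that the smp_mb__after_atomic()
>              * added by this patch pairs with.
>              */
>             smp_mb();
>             wait_for_isolated_drain(pool);  /* may block */
>             flush_work(&pool->free_work);
>             iput(pool->inode);
>     }
>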
> Fix this by ensuring that the check of pool->destroying happens after
> atomic_long_dec(&pool->isolated_pages).
>
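> The resulting pairing follows the classic store-buffering pattern; a
> minimal sketch of the two sides after this patch (not a literal
> excerpt):
>
>     /* CPU A: zs_pool_dec_isolated() */
>     atomic_long_dec(&pool->isolated_pages);
>     smp_mb__after_atomic();          /* order the dec before the loads below */
>     if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
>             wake_up_all(&pool->migration_wait);
>
>     /* CPU B: zs_unregister_migration() */
>     pool->destroying = true;
>     smp_mb();                        /* order the store before the load below */
>     wait_for_isolated_drain(pool);   /* loops on pool->isolated_pages */
>
> With both barriers in place, at least one CPU must observe the other's
> store: either CPU A sees pool->destroying == true and issues the
> wakeup, or CPU B sees pool->isolated_pages already at zero and never
> sleeps. On architectures whose atomic RMW operations are already fully
> ordered (such as x86), smp_mb__after_atomic() compiles away to at most
> a compiler barrier, so the fix costs nothing there.
>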
> Fixes: 701d678599d0 ("mm/zsmalloc.c: fix race condition in zs_destroy_pool")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
> v1->v2:
>   Fix the potential race window rather than simply combining
>   atomic_long_dec and atomic_long_read.
>
> Hi Andrew,
> This patch is version 2 of
> mm-zsmallocc-combine-two-atomic-ops-in-zs_pool_dec_isolated.patch.
> Many thanks.
> ---
>  mm/zsmalloc.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 5f3df680f0a2..0fc388a0202d 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1830,10 +1830,11 @@ static inline void zs_pool_dec_isolated(struct zs_pool *pool)
>  	VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
>  	atomic_long_dec(&pool->isolated_pages);
>  	/*
> -	 * There's no possibility of racing, since wait_for_isolated_drain()
> -	 * checks the isolated count under &class->lock after enqueuing
> -	 * on migration_wait.
> +	 * Checking pool->destroying must happen after atomic_long_dec()
> +	 * for pool->isolated_pages above. Paired with the smp_mb() in
> +	 * zs_unregister_migration().
>  	 */
> +	smp_mb__after_atomic();
>  	if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
>  		wake_up_all(&pool->migration_wait);
>  }
>