From: Vitaly Wool <vitaly.wool@konsulko.com>
Date: Thu, 19 May 2022 09:28:18 +0200
Subject: Re: [PATCH 9/9] mm/z3fold: fix z3fold_page_migrate races with z3fold_map
To: Miaohe Lin <linmiaohe@huawei.com>
Cc: Andrew Morton, Linux-MM <linux-mm@kvack.org>, LKML
In-Reply-To: <20220429064051.61552-10-linmiaohe@huawei.com>
References: <20220429064051.61552-1-linmiaohe@huawei.com> <20220429064051.61552-10-linmiaohe@huawei.com>

On Fri, Apr 29, 2022 at 8:40 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> Think about the below scene:
>
> CPU1                                  CPU2
>  z3fold_page_migrate                  z3fold_map
>   z3fold_page_trylock
>   ...
>   z3fold_page_unlock
>                                        /* slots still points to old zhdr */
>                                        get_z3fold_header
>                                         get slots from handle
>                                         get old zhdr from slots
>                                         z3fold_page_trylock
>                                         return *old* zhdr
>   encode_handle(new_zhdr, FIRST|LAST|MIDDLE)
>   put_page(page) /* zhdr is freed! */
>                                        but zhdr is still used by caller!
>
> z3fold_map can map freed z3fold page and lead to use-after-free bug.
> To fix it, we add PAGE_MIGRATED to indicate z3fold page is migrated
> and soon to be released. So get_z3fold_header won't return such page.
>
> Fixes: 1f862989b04a ("mm/z3fold.c: support page migration")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/z3fold.c | 16 ++++++++++++----
>  1 file changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index a7769befd74e..f41f8b0d9e9a 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -181,6 +181,7 @@ enum z3fold_page_flags {
>  	NEEDS_COMPACTING,
>  	PAGE_STALE,
>  	PAGE_CLAIMED, /* by either reclaim or free */
> +	PAGE_MIGRATED, /* page is migrated and soon to be released */
>  };
>
>  /*
> @@ -270,8 +271,13 @@ static inline struct z3fold_header *get_z3fold_header(unsigned long handle)
>  			zhdr = (struct z3fold_header *)(addr & PAGE_MASK);
>  			locked = z3fold_page_trylock(zhdr);
>  			read_unlock(&slots->lock);
> -			if (locked)
> -				break;
> +			if (locked) {
> +				struct page *page = virt_to_page(zhdr);
> +
> +				if (!test_bit(PAGE_MIGRATED, &page->private))
> +					break;
> +				z3fold_page_unlock(zhdr);
> +			}
>  			cpu_relax();
>  		} while (true);
>  	} else {
> @@ -389,6 +395,7 @@ static struct z3fold_header *init_z3fold_page(struct page *page, bool headless,
>  	clear_bit(NEEDS_COMPACTING, &page->private);
>  	clear_bit(PAGE_STALE, &page->private);
>  	clear_bit(PAGE_CLAIMED, &page->private);
> +	clear_bit(PAGE_MIGRATED, &page->private);
>  	if (headless)
>  		return zhdr;
>
> @@ -1576,7 +1583,7 @@ static int z3fold_page_migrate(struct address_space *mapping, struct page *newpage,
>  	new_zhdr = page_address(newpage);
>  	memcpy(new_zhdr, zhdr, PAGE_SIZE);
>  	newpage->private = page->private;
> -	page->private = 0;
> +	set_bit(PAGE_MIGRATED, &page->private);
>  	z3fold_page_unlock(zhdr);
>  	spin_lock_init(&new_zhdr->page_lock);
>  	INIT_WORK(&new_zhdr->work, compact_page_work);
> @@ -1606,7 +1613,8 @@ static int z3fold_page_migrate(struct address_space *mapping, struct page *newpage,
>
>  	queue_work_on(new_zhdr->cpu, pool->compact_wq, &new_zhdr->work);
>
> -	clear_bit(PAGE_CLAIMED, &page->private);
> +	/* PAGE_CLAIMED and PAGE_MIGRATED are cleared now. */
> +	page->private = 0;
>  	put_page(page);
>  	return 0;
>  }

Reviewed-by: Vitaly Wool <vitaly.wool@konsulko.com>

> --
> 2.23.0
>