Date: Wed, 11 May 2022 11:01:01 -0700
From: Minchan Kim
To: Sultan Alsawaf
Cc: stable@vger.kernel.org, Nitin Gupta, Sergey Senozhatsky,
    Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free
    and page migration
In-Reply-To: <20220509024703.243847-1-sultan@kerneltoast.com>

On Sun, May 08, 2022 at 07:47:02PM -0700, Sultan Alsawaf wrote:
> From: Sultan Alsawaf
>
> The asynchronous zspage free worker tries to lock a zspage's entire page
> list without defending against page migration. Since pages which haven't
> yet been locked can concurrently migrate off the zspage page list while
> lock_zspage() churns away, lock_zspage() can suffer from a few different
> lethal races. It can lock a page which no longer belongs to the zspage and
> unsafely dereference page_private(), it can unsafely dereference a torn
> pointer to the next page (since there's a data race), and it can observe a
> spurious NULL pointer to the next page and thus not lock all of the
> zspage's pages (since a single page migration will reconstruct the entire
> page list, and create_page_chain() unconditionally zeroes out each list
> pointer in the process).
>
> Fix the races by using migrate_read_lock() in lock_zspage() to synchronize
> with page migration.
>
> Cc: stable@vger.kernel.org
> Fixes: 48b4800a1c6a ("zsmalloc: page migration support")

Shouldn't the fix be

Fixes: 77ff465799c6 ("zsmalloc: zs_page_migrate: skip unnecessary loops
but not return -EBUSY if zspage is not inuse")?

Because we didn't migrate ZS_EMPTY pages before.

> Signed-off-by: Sultan Alsawaf
> ---
>  mm/zsmalloc.c | 37 +++++++++++++++++++++++++++++++++----
>  1 file changed, 33 insertions(+), 4 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 9152fbde33b5..5d5fc04385b8 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1718,11 +1718,40 @@ static enum fullness_group putback_zspage(struct size_class *class,
>   */
>  static void lock_zspage(struct zspage *zspage)
>  {
> -	struct page *page = get_first_page(zspage);
> +	struct page *curr_page, *page;
>
> -	do {
> -		lock_page(page);
> -	} while ((page = get_next_page(page)) != NULL);
> +	/*
> +	 * Pages we haven't locked yet can be migrated off the list while we're
> +	 * trying to lock them, so we need to be careful and only attempt to
> +	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
> +	 * may no longer belong to the zspage. This means that we may wait for
> +	 * the wrong page to unlock, so we must take a reference to the page
> +	 * prior to waiting for it to unlock outside migrate_read_lock().

I couldn't get the point here. Why couldn't we simply lock zspage
migration?
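
Something like the below, an untested sketch that leaves lock_zspage()
itself untouched and just takes the migration lock around the whole
page-list walk in the async free path: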
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9152fbde33b5..05ff2315b7b1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1987,7 +1987,10 @@ static void async_free_zspage(struct work_struct *work)
 	list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
 		list_del(&zspage->list);
+
+		migrate_read_lock(zspage);
 		lock_zspage(zspage);
+		migrate_read_unlock(zspage);
 		get_zspage_mapping(zspage, &class_idx, &fullness);
 		VM_BUG_ON(fullness != ZS_EMPTY);

> +	 */
> +	while (1) {
> +		migrate_read_lock(zspage);
> +		page = get_first_page(zspage);
> +		if (trylock_page(page))
> +			break;
> +		get_page(page);
> +		migrate_read_unlock(zspage);
> +		wait_on_page_locked(page);
> +		put_page(page);
> +	}
> +
> +	curr_page = page;
> +	while ((page = get_next_page(curr_page))) {
> +		if (trylock_page(page)) {
> +			curr_page = page;
> +		} else {
> +			get_page(page);
> +			migrate_read_unlock(zspage);
> +			wait_on_page_locked(page);
> +			put_page(page);
> +			migrate_read_lock(zspage);
> +		}
> +	}
> +	migrate_read_unlock(zspage);
>  }
>
>  static int zs_init_fs_context(struct fs_context *fc)
> --
> 2.36.0
>
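
A note for readers on why the patch drops the lock around each wait
rather than using the simpler wrapping sketched above: assuming
migrate_read_lock() is, as in the mainline mm/zsmalloc.c of this
period, read_lock() on the zspage's rwlock_t, its holders must not
sleep, while lock_page() and wait_on_page_locked() both can. So the
patch only ever trylocks a page while the lock is held and moves the
sleep outside it, pinning the page first so it cannot disappear. The
inner-loop fallback from the patch, annotated:

	if (trylock_page(page)) {
		curr_page = page;		/* a locked page cannot be migrated away */
	} else {
		get_page(page);			/* pin the page before dropping the lock */
		migrate_read_unlock(zspage);	/* no sleeping under a spinning rwlock_t */
		wait_on_page_locked(page);	/* may sleep; the rwlock is now dropped */
		put_page(page);
		migrate_read_lock(zspage);	/* re-take; the walk resumes from the
						 * still-locked, hence stable, curr_page */
	}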