Date: Thu, 1 Jul 2021 07:58:36 +0200
From: "gregkh@linuxfoundation.org"
To: 권오훈
Cc: Matthew Wilcox, "akpm@linux-foundation.org", "konrad.wilk@oracle.com", "ohkwon1043@gmail.com", "linux-mm@kvack.org", "linux-kernel@vger.kernel.org"
Subject: Re: [PATCH] mm: cleancache: fix potential race in cleancache apis
References: <20210630073310epcms1p2ad6803cfd9dbc8ab501c4c99f799f4da@epcms1p2> <20210701050644epcms1p5ceaf654fdabec4a126081f9edcbb3fff@epcms1p5>
In-Reply-To: <20210701050644epcms1p5ceaf654fdabec4a126081f9edcbb3fff@epcms1p5>

On Thu, Jul 01, 2021 at 02:06:44PM +0900, 권오훈 wrote:
> On Thu, Jul 1, 2021 at 02:06:45PM +0900, 권오훈 wrote:
> > On Wed, Jun 30, 2021 at 12:26:57PM +0100, Matthew Wilcox wrote:
> > > On Wed,
> > > Jun 30, 2021 at 10:13:28AM +0200, gregkh@linuxfoundation.org wrote:
> > > > On Wed, Jun 30, 2021 at 04:33:10PM +0900, 권오훈 wrote:
> > > > > The current cleancache API implementation has a potential race, as
> > > > > follows, which might lead to corruption in filesystems using
> > > > > cleancache.
> > > > >
> > > > > thread 0                 thread 1                 thread 2
> > > > >
> > > > > in put_page
> > > > > get pool_id K for fs1
> > > > >                          invalidate_fs on fs1
> > > > >                          frees pool_id K
> > > > >                                                   init_fs for fs2
> > > > >                                                   allocates pool_id K
> > > > > put_page puts page
> > > > > which belongs to fs1
> > > > > into cleancache pool for fs2
> > > > >
> > > > > At this point, a file cache page which originally belongs to fs1 might
> > > > > be copied into the cleancache pool of fs2, might later be used as if
> > > > > it were normal cleancache of fs2, and could eventually corrupt fs2
> > > > > when flushed back.
> > > > >
> > > > > Add an rwlock to synchronize invalidate_fs with the other cleancache
> > > > > operations.
> > > > >
> > > > > In normal situations where filesystems are not frequently mounted or
> > > > > unmounted, there will be little performance impact, since the
> > > > > read_lock/read_unlock APIs are used.
> > > > >
> > > > > Signed-off-by: Ohhoon Kwon
> > > >
> > > > What commit does this fix?  Should it go to stable kernels?
> > >
> > > I have a commit I haven't submitted yet with this changelog:
> > >
> > >     Remove cleancache
> > >
> > >     The last cleancache backend was deleted in v5.3 ("xen: remove tmem
> > >     driver"), so it has been unused since.  Remove all its filesystem
> > >     hooks.
> > >
> > >     Signed-off-by: Matthew Wilcox (Oracle)
> >
> > That's even better!
> >
> > But if so, how is the above reported problem even a problem, if no one is
> > using cleancache?
> >
> > thanks,
> >
> > greg k-h
> >
> Dear all,
>
> We are using the cleancache APIs for a proprietary feature at Samsung.
> As Wilcox mentioned, however, there is no cleancache backend in the
> current kernel mainline.
> So if the race patch is accepted, then it seems unnecessary to patch
> previous stable kernels.
>
> Meanwhile, I personally think the cleancache API still has the potential
> to be good material when used with newly arising technologies such as
> pmem or NVMe.
>
> So I suggest postponing the removal of cleancache for a while.

If there are no in-kernel users, it needs to be removed.  If you rely on
this, wonderful, please submit your code as soon as possible.

thanks,

greg k-h