Date: Wed, 1 May 2024 17:47:47 +1000
From: Dave Chinner <david@fromorbit.com>
To: Ritesh Harjani
Cc: Zhang Yi, linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, jack@suse.cz, hch@infradead.org,
	djwong@kernel.org, willy@infradead.org, zokeefe@google.com,
	yi.zhang@huawei.com, chengzhihao1@huawei.com, yukuai3@huawei.com,
	wangkefeng.wang@huawei.com
Subject: Re: [PATCH v4 02/34] ext4: check the extent status again before
	inserting delalloc block
References: <185a0d75-558e-a1ae-9415-c3eed4def60f@huaweicloud.com>
	<87cyqcyt6t.fsf@gmail.com>
In-Reply-To: <87cyqcyt6t.fsf@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
On Fri, Apr 26, 2024 at 10:09:22PM +0530, Ritesh Harjani wrote:
> Zhang Yi writes:
>
> > On 2024/4/26 20:57, Ritesh Harjani (IBM) wrote:
> >> Ritesh Harjani (IBM) writes:
> >>
> >>> Zhang Yi writes:
> >>>
> >>>> From: Zhang Yi
> >>>>
> >>>> Currently we look up the extent status entry without holding
> >>>> i_data_sem before inserting a delalloc block. That works fine in
> >>>> the buffered write path because it holds i_rwsem and the folio
> >>>> lock, and the mmap path holds the folio lock, so the extent found
> >>>> locklessly cannot be modified concurrently. But it can race with
> >>>> fallocate, which allocates blocks without holding i_rwsem or a
> >>>> folio lock:
> >>>>
> >>>>  ext4_page_mkwrite()             ext4_fallocate()
> >>>>   block_page_mkwrite()
> >>>>    ext4_da_map_blocks()
> >>>>     //find hole in extent status tree
> >>>>                                   ext4_alloc_file_blocks()
> >>>>                                    ext4_map_blocks()
> >>>>                                     //allocate block and unwritten extent
> >>>>    ext4_insert_delayed_block()
> >>>>     ext4_da_reserve_space()
> >>>>      //reserve one more block
> >>>>     ext4_es_insert_delayed_block()
> >>>>      //drop unwritten extent and add delayed extent by mistake
> >>>>
> >>>> The delalloc extent is then wrong until writeback, the extra
> >>>> reserved block can never be released, and it triggers the warning
> >>>> below:
> >>>>
> >>>>   EXT4-fs (pmem2): Inode 13 (00000000bbbd4d23): i_reserved_data_blocks(1) not cleared!
> >>>>
> >>>> Holding i_data_sem in write mode directly would fix the problem,
> >>>> but it's expensive; we should keep the lockless check and check
> >>>> the extent again once we need to add a new delalloc block.
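(Purely for illustration, the shape of that fix - an optimistic
lockless lookup followed by a re-check under i_data_sem before the
delayed extent is inserted - looks roughly like the sketch below. The
example_* helpers are hypothetical placeholders, not the actual ext4
patch.)

        /* Sketch only: the example_* helpers are hypothetical. */
        static int example_da_map_block(struct inode *inode, ext4_lblk_t lblk)
        {
                /* Fast path: lockless lookup in the extent status tree. */
                if (example_es_lookup_cached(inode, lblk))
                        return 0;

                down_write(&EXT4_I(inode)->i_data_sem);
                /*
                 * fallocate may have added an unwritten extent between the
                 * lockless check and taking i_data_sem, so look the block up
                 * again before inserting a delayed extent.
                 */
                if (!example_es_lookup_locked(inode, lblk))
                        example_insert_delayed_block(inode, lblk);
                up_write(&EXT4_I(inode)->i_data_sem);

                return 0;
        }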
> >>>
> >>> Hi Zhang,
> >>>
> >>> It's a nice finding. I was wondering if this was caught in any of
> >>> the xfstests?
> >>>
> >
> > Hi, Ritesh
> >
> > I caught this issue when I tested my iomap series in generic/344 and
> > generic/346. It's easy to reproduce because the iomap buffered write
> > path doesn't hold the folio lock while inserting delalloc blocks, so
> > it can be raced by the mmap page fault path. But the buffer_head
> > buffered write path can't trigger this problem,
>
> ya right! That's the difference between how ->map_blocks() is called
> in the buffer_head path versus the iomap path. In iomap, the
> ->map_blocks() call happens first to map a large extent, and then it
> iterates over all the locked folios covering the mapped extent to do
> the writes.

Yes - a fundamental property of the iomap is that it is cached
filesystem state that isn't protected by locks in any way. It can
become stale if a concurrent operation modifies the extent map whilst
the write operation is in progress.

Have a look at iomap_write_begin(). Specifically:

        /*
         * Now we have a locked folio, before we do anything with it we need to
         * check that the iomap we have cached is not stale. The inode extent
         * mapping can change due to concurrent IO in flight (e.g.
         * IOMAP_UNWRITTEN state can change and memory reclaim could have
         * reclaimed a previously partially written page at this index after IO
         * completion before this write reaches this file offset) and hence we
         * could do the wrong thing here (zero a page range incorrectly or fail
         * to zero) and corrupt data.
         */
        if (folio_ops && folio_ops->iomap_valid) {
                bool iomap_valid = folio_ops->iomap_valid(iter->inode,
                                                          &iter->iomap);
                if (!iomap_valid) {
                        iter->iomap.flags |= IOMAP_F_STALE;
                        status = 0;
                        goto out_unlock;
                }
        }

Yup, there's the hook to detect stale cached iomaps. The struct iomap
has a validity_cookie field in it, which is an opaque cookie set by
the filesystem when it creates the iomap. Here we have the folio
locked, so we have guaranteed exclusive access to this file range, and
we pass the iomap with its cookie back to the filesystem to determine
whether the iomap is still valid.

XFS uses generation numbers in the extent tree to determine if the
cached iomap is still valid. Any change to the extent tree bumps the
generation number, and the current generation number is placed in
iomap->validity_cookie when the iomap is created. If the generation
number on the inode extent tree differs from the number held in the
validity_cookie, then the extent tree has changed and the iomap must
be considered stale.

The iomap iterator then sees IOMAP_F_STALE and generates a new iomap
for the remaining range of the write operation.

Writeback has the same issue - the iomap_writepage_ctx caches the
iomap we obtained for the current writeback, so if something else
changes the extent state while writeback is underway, that map is
stale and needs to be refetched. XFS handles this by wrapping the
iomap_writepage_ctx with an xfs_writepage_ctx that holds generation
numbers, so that when writeback calls
iomap_writeback_ops->map_blocks() it can check that the cached iomap
is still valid, the same as we do in iomap_write_begin().
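To make that concrete, here is a minimal sketch of what such a
validity check boils down to. It is illustrative only - the example_*
names and the per-inode extent_seq counter are made up for the
example; XFS actually derives the cookie from its extent tree
generation numbers as described above.

        /* Sketch only: example_inode and its extent_seq are hypothetical. */
        static bool example_iomap_valid(struct inode *inode,
                                        const struct iomap *iomap)
        {
                /* Stale if the extent tree changed since ->iomap_begin(). */
                return iomap->validity_cookie ==
                                READ_ONCE(EXAMPLE_I(inode)->extent_seq);
        }

        /* ->iomap_begin() stamps the cookie when it builds the mapping,
         * and every extent tree modification bumps the counter under the
         * filesystem's internal extent tree lock. */
        iomap->validity_cookie = READ_ONCE(EXAMPLE_I(inode)->extent_seq);

        /* Writeback side, analogous to what XFS does: wrap the generic
         * iomap_writepage_ctx in a filesystem context that remembers the
         * sequence number the cached mapping was built against, and
         * compare it in ->map_blocks() before reusing the mapping. */
        struct example_writepage_ctx {
                struct iomap_writepage_ctx      ctx;
                u64                             extent_seq;
        };

With a scheme like that, the folio-locked check in iomap_write_begin()
shown above fails whenever anything has modified the extent tree since
the mapping was cached, and the iterator refetches the mapping instead
of writing through stale state.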
> Whereas in buffer_head, while iterating we first instantiate/lock the
> folio and then call ->map_blocks() to map an extent for the given
> folio.
>
> ... So this opens up this window for a race between the iomap
> buffered write path and the page mkwrite path for inserting delalloc
> block entries.

iomap allows them to race - the filesystem extent tree needs its own
internal locking to serialise lookups and modifications of the extent
tree, whilst the data modifications and page cache state changes are
serialised by the folio lock. That's why iomap_write_begin() checks
that the iomap is still valid only after it has a locked folio it is
ready to write data into.

Remember that delalloc extents need to be inserted into the
filesystem's internal tree when ->iomap_begin() creates them. Hence
anything that races to write over that same range will only create the
delalloc extent once - the second operation will simply find the
existing delalloc extent the first operation created...

> > the race between buffered write path and fallocate path
> > was discovered while I was analyzing the code, so I'm not sure if
> > it could be caught by xfstests now, at least I haven't noticed this
> > problem so far.
> >
>
> Did you mean the race between the page fault path and the fallocate
> path here? Because the buffered write path and the fallocate path
> should not have any race since both take the inode_lock. I guess you
> meant the page fault path and the fallocate path, for which you wrote
> this patch too :)
>
> I am surprised why we cannot see this race between page mkwrite and
> fallocate in fstests for inserting da entries into the extent status
> cache.

Finding workloads that hit these sorts of races reliably is -real
hard-. Read the commit message in commit d7b64041164c ("iomap: write
iomap validity checks"), especially this link:

https://lore.kernel.org/linux-xfs/20220817093627.GZ3600936@dread.disaster.area/

And this comment I made in a followup email:

"[....] and it points out that every filesystem using iomap for
multi-page extent maps will need to implement iomap invalidation
detection in some way."

> Because the race you identified looks like a legitimate race and is
> mostly happening since ext4_da_map_blocks() was not doing the right
> thing.
>
> ... looking at src/holetest, it doesn't really exercise this path.
> So maybe we can write such an fstest to trigger this race.

We have a regression test that exercises the folio_ops->iomap_valid()
functionality: xfs/559. It uses the XFS error injection infrastructure
to add a strategic delay in xfs_iomap_valid(), so that we can hold an
iomap cached for an arbitrary period of time and let writeback and
page cache reclaim do their thing to make the extent map held by the
write become stale. It also uses ftrace to capture the tracepoint that
tells us the invalid iomap state was seen and the IOMAP_F_STALE
behaviour triggered.

This could be turned into a generic test, but there are a lot of
missing infrastructure bits needed to do it....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com