Date: Fri, 01 Mar 2024 02:51:08 +0530
Message-Id: <8734tb0xx7.fsf@doe.com>
From: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
To: John Garry, Theodore Ts'o
Cc: lsf-pc@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-mm
Subject: Re: [LSF/MM/BPF TOPIC] untorn buffered writes
John Garry writes:

> On 28/02/2024 23:24, Theodore Ts'o wrote:
>> On Wed, Feb 28, 2024 at 04:06:43PM +0000, John Garry wrote:
>>> Note that the initial RFC for my series did propose an interface that
>>> does allow a write to be split in the kernel on a boundary, and that
>>> boundary was evaluated on a per-write basis by the length and alignment
>>> of the write along with any extent alignment granularity.
>>>
>>> We decided not to pursue that, and instead require a write per 16K
>>> page, for the example above.
>>
>> Yes, I did see that. And that leads to the problem where if you do an
>> RWF_ATOMIC write which is 32k, then we are promising that it will be
>> sent as a single 32k SCSI or NVMe request
>
> We actually guarantee that it will be sent as part of a single request
> which is at least 32K, as we may merge atomic writes in the block layer.
> But that's not so important here.
>
>> --- even though that isn't required by the database,
>
> Then I do wonder why the DB is asking for some 32K of data to be written
> with no-tears guarantee. Convenience, I guess.
>
>> the API is *promising* that we will honor it. But that leads to the
>> problem where for buffered writes, we need to track which dirty pages
>> are part of write #1, where we had promised a 32k "atomic" write, which
>> pages were part of writes #2 and #3, which were each promised to be 16k
>> "atomic" writes, and which pages were part of write #4, which was
>> promised to be a 64k write. If the pages dirtied by writes #1, #2, #3,
>> and #4 are all contiguous, how do we know what promise we had made
>> about which pages should be atomically sent together in a single write
>> request? Do we have to store all of this information somewhere in the
>> struct page or struct folio?
>>
>> And if we use Matthew's suggestion that we treat each folio as the
>> atomic write unit, does that mean that we have to break apart or join
>> folios together depending on which writes were sent with an RWF_ATOMIC
>> write flag and by their size?
>>
>> You see? This is why I think the RWF_ATOMIC flag, which was mostly
>> harmless when it over-promised unneeded semantics for Direct I/O, is
>> actively harmful and problematic for buffered I/O.
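To make the bookkeeping problem above concrete, here is a small Python
sketch (purely illustrative, not kernel code; the page size, write
offsets, and the per-page `promise` map are all assumptions) modeling the
per-page promise tracking Ted describes for writes #1-#4:

```python
# Hypothetical model of the writeback bookkeeping problem: four contiguous
# buffered writes, each carrying its own RWF_ATOMIC promise.
# write #1 = 32k, writes #2 and #3 = 16k each, write #4 = 64k.
PAGE = 4096

def pages(offset, length):
    """Page indexes dirtied by a write of `length` bytes at `offset`."""
    return range(offset // PAGE, (offset + length) // PAGE)

writes = [  # (write id, offset, length) -- offsets chosen to be contiguous
    (1, 0,          32 * 1024),
    (2, 32 * 1024,  16 * 1024),
    (3, 48 * 1024,  16 * 1024),
    (4, 64 * 1024,  64 * 1024),
]

# To honour each promise at writeback time, every dirty page would need a
# record of which atomic unit it belongs to -- extra per-page/folio state.
promise = {}
for wid, off, length in writes:
    for pg in pages(off, length):
        promise[pg] = wid

# All 128k of dirty pages are contiguous, yet they must be issued as four
# separate untorn requests of different sizes.
print(sorted(set(promise.values())))  # the four distinct promises
print(len(promise))                   # 128k / 4k = 32 dirty pages
```

Writeback sees one contiguous dirty range, so without some such per-page
(or per-folio) record it cannot recover the four unit boundaries.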
>>> If you check the latest discussion on XFS support we are proposing
>>> something along those lines:
>>> https://lore.kernel.org/linux-fsdevel/Zc1GwE%2F7QJisKZCX@dread.disaster.area/
>>>
>>> There FS_IOC_FSSETXATTR would be used to set extent size w/
>>> fsx.fsx_extsize and new flag FS_XFLAG_FORCEALIGN to guarantee extent
>>> alignment, and this alignment would be the largest untorn write
>>> granularity.
>>>
>>> Note that I already got push back on using fcntl for this.
>>
>> There are two separable untorn write granularities that you might need
>> to set. One is specifying the constraints that must be required for all
>> block allocations associated with the file. This needs to be
>> persistent, and stored with the file or directory (or for the entire
>> file system; I'll talk about this option in a moment) so that we know
>> that a particular file has blocks allocated in contiguous chunks with
>> the correct alignment, so we can make the untorn write guarantee.
>> Since this needs to be persistent, and set when the file is first
>> created, that's why I could imagine that someone pushed back on using
>> fcntl(2) --- since fcntl is a property of the file descriptor, not of
>> the inode, and when you close the file descriptor, nothing that you
>> set via fcntl(2) is persisted.
>>
>> However, the second untorn write granularity is the one required for
>> writes using a particular file descriptor. And please note that these
>> two values don't necessarily need to be the same. For example, if the
>> first granularity is 32k, such that block allocations are done in 32k
>> clusters, aligned on 32k boundaries, then you can provide untorn write
>> guarantees of 8k, 16k, or 32k --- so long as (a) the file or block
>> device has the appropriate alignment guarantees, and (b) the hardware
>> can support untorn write guarantees of the requested size.
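The two-granularity rule above can be sketched as a quick check (a
hypothetical helper, not any existing kernel or libc interface; the
parameter names and hardware limits are assumptions for illustration):

```python
# Illustrative sketch of the two separable granularities: an untorn write
# of `awu` bytes at `offset` is only guaranteeable if (a) the persistent
# allocation granularity is an aligned multiple of the unit, and (b) the
# hardware supports a unit of that size.
def can_untorn_write(offset, awu, alloc_gran, hw_min, hw_max):
    if awu & (awu - 1):                # power-of-two unit sizes only
        return False
    if not (hw_min <= awu <= hw_max):  # (b) hardware unit limits
        return False
    if alloc_gran % awu:               # (a) unit must evenly divide the
        return False                   #     aligned allocation clusters
    return offset % awu == 0           # the write itself must be aligned

K = 1024
# With 32k-aligned allocation and hardware supporting 4k..64k units,
# aligned 8k/16k/32k writes qualify; 64k exceeds what the allocation
# alignment can guarantee.
print(can_untorn_write(16 * K,  8 * K, 32 * K, 4 * K, 64 * K))  # True
print(can_untorn_write(0,      64 * K, 32 * K, 4 * K, 64 * K))  # False
```

This mirrors Ted's example: with a persistent granularity of 32k, the
per-descriptor untorn unit can be 8k, 16k, or 32k, but nothing larger.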
>> And for some file systems, and for block devices, you might not need
>> to set the first untorn write granularity size at all. For example,
>> if the block device represents the entire disk, or represents a
>> partition which is aligned on a 1MB boundary (which tends to be the
>> case for GPT partitions IIRC), then we don't need to set any kind of
>> magic persistent granularity size, because it's a fundamental property
>> of the partition. As another example, ext4 has the bigalloc file
>> system feature, which allows you to set, at file system creation time,
>> a cluster allocation size which is a power-of-two multiple of the
>> blocksize. So for example, if you have a block size of 4k, and the
>> block/cluster ratio is 16, then the cluster size is 64k, and all data
>> blocks will be allocated in aligned 64k chunks.
>>
>> The ext4 bigalloc feature has been around since 2011, so it's
>> something that can be enabled even for a really ancient distro kernel.
>> 🙂 Hence, we don't actually *need* any file system format changes.
>
> That's what I thought, until this following proposal:
> https://lore.kernel.org/linux-ext4/cover.1701339358.git.ojaswin@linux.ibm.com/

So there are two ways ext4 could achieve the aligned block allocations
required to guarantee atomic writes:

1. Format a filesystem with bigalloc, which ensures allocations happen in
   units of clusters, or format it with -b on a higher-pagesize system.

2. Add intelligence to ext4's multi-block allocator to provide aligned
   allocations (this option won't require any reformatting).

The patch series you pointed to is an initial RFC for doing option 2,
i.e. adding allocator changes to provide aligned allocations. But I agree
none of that should require any on-disk fs layout changes. Currently we
are looking into utilizing option 1, which should be relatively easier to
do than option 2, more so while the interfaces for doing atomic writes
are still getting discussed.

-ritesh