Date: Thu, 20 Mar 2025 20:05:26 -0700
From: "Darrick J. Wong" <djwong@kernel.org>
To: Ritesh Harjani
Cc: Luis Chamberlain, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-block@vger.kernel.org, lsf-pc@lists.linux-foundation.org,
	david@fromorbit.com, leon@kernel.org, hch@lst.de, kbusch@kernel.org,
	sagi@grimberg.me, axboe@kernel.dk, joro@8bytes.org,
	brauner@kernel.org, hare@suse.de, willy@infradead.org,
	john.g.garry@oracle.com, p.raghav@samsung.com, gost.dev@samsung.com,
	da.gomez@samsung.com
Subject: Re: [LSF/MM/BPF TOPIC] breaking the 512 KiB IO boundary on x86_64
Message-ID: <20250321030526.GW89034@frogsfrogsfrogs>
References: <87o6xvsfp7.fsf@gmail.com>
 <20250320213034.GG2803730@frogsfrogsfrogs>
 <87jz8jrv0q.fsf@gmail.com>
In-Reply-To: <87jz8jrv0q.fsf@gmail.com>

On Fri, Mar 21, 2025 at 07:43:09AM +0530, Ritesh Harjani wrote:
> "Darrick J. Wong" writes:
> 
> > On Fri, Mar 21, 2025 at 12:16:28AM +0530, Ritesh Harjani wrote:
> >> Luis Chamberlain writes:
> >> 
> >> > We've been constrained to a single max IO of 512 KiB for a while now
> >> > on x86_64, due to the number of DMA segments and the segment size.
> >> > With LBS the segments can be much bigger without using huge pages,
> >> > so on a 64 KiB block size filesystem you can now see 2 MiB IOs when
> >> > using buffered IO. But direct IO is still crippled, because its
> >> > allocations come from anonymous memory, and unless you are using
> >> > mTHP you won't get large folios. mTHP is also non-deterministic, so
> >> > relying on large folios leaves direct IO worse off: you may
> >> > *sometimes* get large folios and sometimes you may not, and IO
> >> > patterns can therefore be erratic.
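
(An aside, since "how are we computing the max IO?" comes up again at
the end: if I'm counting right, 512 KiB is just the queue limits
multiplied out. Fragmented 4k anonymous pages mean one page per DMA
segment, so at the common 128-segment cap:

	128 segments * 4 KiB/segment = 512 KiB per IO

The 128 is my assumption about a typical configuration; the real caps
for a given device live in /sys/block/<dev>/queue/max_segments and
max_segment_size.)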
> >> > As I just posted in a simple RFC [0], I believe the two-step DMA
> >> > API helps resolve this. Provided we move the block integrity stuff
> >> > to the new DMA API as well, the only patches really needed to
> >> > support larger IOs for direct IO on NVMe are:
> >> > 
> >> >   iomap: use BLK_MAX_BLOCK_SIZE for the iomap zero page
> >> >   blkdev: lift BLK_MAX_BLOCK_SIZE to page cache limit
> >> 
> >> Maybe some naive questions, but I would like help from people who can
> >> confirm whether my understanding here is correct.
> >> 
> >> Given that we now support large folios in buffered I/O directly on
> >> raw block devices, applications must carefully serialize direct I/O
> >> and buffered I/O operations on these devices, right?
> >> 
> >> IIUC, until now, mixing buffered I/O and direct I/O (for doing I/O on
> >> /dev/xxx) on separate boundaries (blocksize == pagesize) worked fine,
> >> since direct I/O would only invalidate its corresponding page in the
> >> page cache. This assumes that both direct I/O and buffered I/O use
> >> the same blocksize and pagesize (e.g. both using 4K or both using
> >> 64K). However, with large folios now introduced in the buffered I/O
> >> path for block devices, direct I/O may end up invalidating an entire
> >> large folio, which could span across a region where an ongoing direct
> >> I/O operation
> > 
> > I don't understand the question. Should this read ^^^ "buffered"?
> 
> oops, yes.
> 
> > As in, directio submits its write bio, meanwhile another thread
> > initiates a buffered write nearby, the write gets a 2MB folio, and
> > then the post-write invalidation knocks down the entire large folio?
> > Even though the two ranges written are (say) 256k apart?
> 
> Yes, Darrick. That is my question.
> 
> i.e. without large folios in block devices one could do direct-io and
> buffered-io in parallel even right next to each other (assuming 4k
> pagesize):
> 
> |4k-direct-io | 4k-buffered-io |
> 
> However, with large folios now supported in the buffered-io path for
> block devices, the application cannot submit such a direct-io +
> buffered-io pattern in parallel, since direct-io can end up
> invalidating a folio spanning its 4k range, on which buffered-io is in
> progress.
> 
> So now applications need to be careful not to submit direct-io and
> buffered-io in parallel in such patterns on a raw block device,
> correct? That is what I would like to confirm.

I think that's correct, and kind of horrifying if true. I wonder if
->invalidate_folio might be a reasonable way to clear the uptodate bits
on the relevant parts of a large folio without having to split or
remove it?
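
Something with roughly this shape, say -- with the caveat that this is
an untested sketch, and the sub-folio helper at the end is made up
(iomap tracks per-block uptodate bits in struct iomap_folio_state, but
today nothing clears them for a sub-folio range):

/* Untested sketch of a bdev ->invalidate_folio handler. */
static void bdev_invalidate_folio(struct folio *folio, size_t offset,
				  size_t len)
{
	if (offset == 0 && len == folio_size(folio)) {
		/* Whole folio: same as today, drop everything. */
		folio_clear_uptodate(folio);
		return;
	}

	/*
	 * Partial range: mark only the blocks hit by the direct write
	 * !uptodate so that a later read refetches them, leaving the
	 * rest of the large folio (and any concurrent buffered write
	 * to it) alone. ifs_clear_range_uptodate() is hypothetical.
	 */
	ifs_clear_range_uptodate(folio, offset, len);
}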
--D

> > --D
> > 
> >> is taking place. That means, with large folio support in block
> >> devices, application developers must now ensure that direct I/O and
> >> buffered I/O operations on block devices are properly serialized,
> >> correct?
> >> 
> >> I was looking at the posix page [1] and I don't think the POSIX
> >> standard defines the semantics for operations on block devices. So it
> >> is really up to the individual OS implementation, correct?
> >> 
> >> And IIUC, what Linux recommends is to never mix any kind of direct-io
> >> and buffered-io when doing I/O on raw block devices, but I cannot
> >> find this recommendation in any Documentation. So can someone please
> >> point me to where we recommend this?
> 
> And this ^^^
> 
> -ritesh
> 
> >> [1]: https://pubs.opengroup.org/onlinepubs/9799919799/
> >> 
> >> -ritesh
> >> 
> >> > The other two nvme-pci patches in that series are just there to
> >> > help with experimentation for now and can be ignored.
> >> > 
> >> > It does beg a few questions:
> >> > 
> >> > - How are we computing the new max single IO anyway? Are we really
> >> >   bounded only by what devices support?
> >> > - Do we believe this is a step in the right direction?
> >> > - Is 2 MiB a sensible max block sector size limit for the next few
> >> >   years?
> >> > - What other considerations should we have?
> >> > - Do we want something more deterministic for large folios for
> >> >   direct IO?
> >> > 
> >> > [0] https://lkml.kernel.org/r/20250320111328.2841690-1-mcgrof@kernel.org
> >> > 
> >> > Luis
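
P.S. In case a concrete version of the mixed-IO pattern helps the
discussion, here is my (untested) reading of what Ritesh describes:
two writers to adjacent 4k ranges of the same block device, one direct
and one buffered, imagined racing from separate threads or processes.
The device path is made up, and 4096 assumes a 4k logical block size:

#define _GNU_SOURCE	/* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *dev = "/dev/sdX";	/* hypothetical device */
	char bbuf[4096];
	void *dbuf;

	/* Direct writer: needs a buffer aligned to the block size. */
	int dfd = open(dev, O_WRONLY | O_DIRECT);
	if (dfd < 0 || posix_memalign(&dbuf, 4096, 4096))
		return 1;
	memset(dbuf, 0xaa, 4096);

	/* Buffered writer to the adjacent 4k range. */
	int bfd = open(dev, O_WRONLY);
	if (bfd < 0)
		return 1;
	memset(bbuf, 0xbb, sizeof(bbuf));

	/*
	 * Imagine these two pwrites racing. If the buffered write
	 * landed in a large folio spanning both ranges, the direct
	 * write's page cache invalidation of [0, 4k) may now knock
	 * out the neighboring buffered range as well.
	 */
	pwrite(bfd, bbuf, sizeof(bbuf), 4096);	/* buffered: [4k, 8k) */
	pwrite(dfd, dbuf, 4096, 0);		/* direct:   [0, 4k)  */

	close(bfd);
	close(dfd);
	free(dbuf);
	return 0;
}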