From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 26 Feb 2024 16:19:14 -0500
From: Kent Overstreet
To: Matthew Wilcox
Cc: Linus Torvalds, Al Viro, Luis Chamberlain, lsf-pc@lists.linux-foundation.org,
    linux-fsdevel@vger.kernel.org, linux-mm, Daniel Gomez, Pankaj Raghav,
    Jens Axboe, Dave Chinner, Christoph Hellwig, Chris Mason, Johannes Weiner,
    "Paul E. McKenney"
Subject: Re: [LSF/MM/BPF TOPIC] Measuring limits and enhancing buffered IO
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

+cc Paul

On Mon, Feb 26, 2024 at 04:17:19PM -0500, Kent Overstreet wrote:
> On Mon, Feb 26, 2024 at 09:07:51PM +0000, Matthew Wilcox wrote:
> > On Mon, Feb 26, 2024 at 09:17:33AM -0800, Linus Torvalds wrote:
> > > Willy - tangential side note: I looked closer at the issue that you
> > > reported (indirectly) with the small reads during heavy write
> > > activity.
> > >
> > > Our _reading_ side is very optimized and has none of the write-side
> > > oddities that I can see, and we just have
> > >
> > >   filemap_read ->
> > >     filemap_get_pages ->
> > >       filemap_get_read_batch ->
> > >         folio_try_get_rcu()
> > >
> > > and there is no page locking or other locking involved (assuming the
> > > page is cached and marked uptodate etc, of course).
> > >
> > > So afaik, it really is just that *one* atomic access (and the matching
> > > page ref decrement afterwards).
> >
> > Yep, that was what the customer reported on their ancient kernel, and
> > we at least didn't make that worse ...
> >
> > > We could easily do all of this without getting any ref to the page at
> > > all if we did the page cache release with RCU (and the user copy with
> > > "copy_to_user_atomic()"). Honestly, anything else looks like a
> > > complete disaster. For tiny reads, a temporary buffer sounds ok, but
> > > really *only* for tiny reads where we could have that buffer on the
> > > stack.
> > >
> > > Are tiny reads (handwaving: 100 bytes or less) really worth optimizing
> > > for to that degree?
> > >
> > > In contrast, the RCU-delaying of the page cache might be a good idea
> > > in general. We've had other situations where that would have been
> > > nice. The main worry would be low-memory situations, I suspect.
> > >
> > > The "tiny read" optimization smells like a benchmark thing to me.
> > > Even with the cacheline possibly bouncing, the system call overhead for
> > > tiny reads (particularly with all the mitigations) should be orders of
> > > magnitude higher than two atomic accesses.
> >
> > Ah, good point about the $%^&^*^ mitigations. This was pre mitigations.
> > I suspect that this customer would simply disable them; afaik the machine
> > is an appliance and one interacts with it purely by sending transactions
> > to it (it's not even an SQL system, much less a "run arbitrary javascript"
> > kind of system). But that makes it even more special case, inapplicable
> > to the majority of workloads and closer to smelling like a benchmark.
> >
> > I've thought about and rejected RCU delaying of the page cache in the
> > past. With the majority of memory in anon memory & file memory, it just
> > feels too risky to have so much memory waiting to be reused. We could
> > also improve gup-fast if we could rely on RCU freeing of anon memory.
> > Not sure what workloads might benefit from that, though.
>
> RCU allocating and freeing of memory can already be fairly significant
> depending on workload, and I'd expect that to grow - we really just need
> a way for reclaim to kick RCU when needed (and probably add a percpu
> counter for "amount of memory stranded until the next RCU grace
> period").
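
To make that last idea a bit more concrete, here is a minimal sketch of
what a "stranded memory" counter plus a reclaim hook could look like.
Everything below is hypothetical: rcu_stranded_bytes, rcu_stranded_add/sub,
reclaim_maybe_kick_rcu and the threshold are made-up names for illustration;
only the percpu_counter_*() helpers and synchronize_rcu_expedited() are
existing kernel APIs, and a real implementation would probably want an
asynchronous nudge rather than a blocking expedited grace period called
from reclaim context.

/*
 * Hypothetical sketch only - not an existing kernel interface.
 * Track how many bytes are waiting on an RCU grace period before they
 * can be freed, and let reclaim ask RCU to hurry up when that figure
 * gets large.
 */
#include <linux/init.h>
#include <linux/types.h>
#include <linux/gfp.h>
#include <linux/percpu_counter.h>
#include <linux/rcupdate.h>

static struct percpu_counter rcu_stranded_bytes;

static int __init rcu_stranded_init(void)
{
	return percpu_counter_init(&rcu_stranded_bytes, 0, GFP_KERNEL);
}

/* Call when an object is handed to call_rcu()/kvfree_rcu() for freeing. */
static inline void rcu_stranded_add(size_t bytes)
{
	percpu_counter_add(&rcu_stranded_bytes, bytes);
}

/* Call from the RCU callback once the memory has actually been freed. */
static inline void rcu_stranded_sub(size_t bytes)
{
	percpu_counter_sub(&rcu_stranded_bytes, bytes);
}

/* Arbitrary threshold, for illustration only. */
#define RCU_STRANDED_KICK_BYTES		(64UL << 20)	/* 64 MiB */

/*
 * Reclaim hook: if too much memory is tied up behind a grace period,
 * force one.  synchronize_rcu_expedited() blocks, so a real version
 * would likely punt this to a workqueue instead of calling it directly
 * from the reclaim path.
 */
static void reclaim_maybe_kick_rcu(void)
{
	if (percpu_counter_read_positive(&rcu_stranded_bytes) >
	    RCU_STRANDED_KICK_BYTES)
		synchronize_rcu_expedited();
}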