From: Nhat Pham
Date: Tue, 17 Oct 2023 12:24:17 -0700
Subject: Re: [PATCH 0/2] minimize swapping on zswap store failure
To: Johannes Weiner
Cc: Yosry Ahmed, akpm@linux-foundation.org, cerasuolodomenico@gmail.com,
 sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
 hughd@google.com, corbet@lwn.net, konrad.wilk@oracle.com,
 senozhatsky@chromium.org, rppt@kernel.org, linux-mm@kvack.org,
 kernel-team@meta.com, linux-kernel@vger.kernel.org, david@ixit.cz
In-Reply-To: <20231017044745.GC1042487@cmpxchg.org>
References: <20231017003519.1426574-1-nphamcs@gmail.com>
 <20231017044745.GC1042487@cmpxchg.org>
On Mon, Oct 16, 2023 at 9:47 PM Johannes Weiner wrote:
>
> On Mon, Oct 16, 2023 at 05:57:31PM -0700, Yosry Ahmed wrote:
> > On Mon, Oct 16, 2023 at 5:35 PM Nhat Pham wrote:
> > >
> > > Currently, when a zswap store attempt fails, the page is immediately
> > > swapped out. This could happen for a variety of reasons. For instance,
> > > the compression algorithm could fail (such as when the data is not
> > > compressible), or the backend allocator might not be able to find a
> > > suitable slot for the compressed page. If these pages are needed
> > > later on, users will incur IOs from swapins.
> > >
> > > This issue prevents the adoption of zswap for potential users who
> > > cannot tolerate the latency associated with swapping. In many cases,
> > > these IOs are avoidable if we just keep in memory the pages that zswap
> > > fails to store.
> > >
> > > This patch series adds two new features for zswap that will alleviate
> > > the risk of swapping:
> > >
> > > a) When a store attempt fails, keep the page untouched in memory
> > > instead of swapping it out.
> >
> > What about writeback when the zswap limit is hit? I understand the
> > problem, but I am wondering if this is the correct way of fixing it.
> > We really need to make zswap work without a backing swapfile, which I
> > think is the correct way to fix all these problems. I was working on
> > that, but unfortunately I had to pivot to something else before I had
> > something that was working.
> >
> > At Google, we have "ghost" swapfiles that we use just to use zswap
> > without a swapfile. They are sparse files, and we have internal kernel
> > patches to flag them and never try to actually write to them.
> >
> > I am not sure how many bandaids we can afford before doing the right
> > thing. I understand it's a much larger surgery; perhaps there is a way
> > to get a short-term fix that is also a step towards the final state we
> > want to reach instead?
>
> I agree it should also stop writeback due to the limit.
>
> Note that a knob like this is still useful even when zswap space is
> decoupled from disk swap slots. We are still using disk swap broadly
> in the fleet as well, so a static ghost file setup wouldn't be a good
> solution for us. Swapoff with common swapfile sizes is often not an
> option during runtime, due to how slow it is, and the destabilizing
> effect it can have on the system unless that's basically completely
> idle. As such, we expect to continue deploying swap files for physical
> use, and switch the zswap-is-terminal knob depending on the workload.
>
> The other aspect to this is that workloads that do not want the
> swapout/swapin overhead for themselves are usually co-located with
> system management software, and/or can share the host with less
> latency-sensitive workloads that should continue to use disk swap.
>
> Due to the latter case, I wonder if a global knob is actually
> enough. More likely we'd need per-cgroup control over this.
>
> [ It's at this point where the historic coupling of zswap to disk
> space is especially unfortunate. Because of it, zswap usage counts
> toward the memory.swap allowance. If these were separate, we could
> have easily set memory.zswap.max=max, memory.swap.max=0 to achieve
> the desired effect.
>
> Alas, that ship has sailed. Even after a decoupling down the line it
> would be difficult to change established memory.swap semantics.
> ]
>
> So I obviously agree that we still need to invest in decoupling zswap
> space from physical disk slots. It's insanely wasteful, especially
> with larger memory capacities. But while it would be a fantastic
> optimization, I don't see how it would be an automatic solution to the
> problem that inspired this proposal.
>
> We still need some way to reasonably express desired workload policy
> in a setup that supports multiple, simultaneous modes of operation.
>
> > > b) If the store attempt fails at the compression step, allow the page
> > > to be stored in its uncompressed form in the zswap pool. This maintains
> > > the LRU ordering of pages, which will be helpful for accurate
> > > memory reclaim (zswap writeback in particular).
> >
> > This is dangerous. Johannes and I discussed this before. This means
> > that reclaim can end up allocating more memory instead of freeing.
> > Allocations made in the reclaim path are made under the assumption
> > that we will eventually free memory. In this case, we won't. In the
> > worst-case scenario, reclaim can leave the system/memcg in a worse
> > state than before it started.
>
> Yeah, this is a concern. It's not such a big deal if it's only a few
> pages, and we're still shrinking the footprint on aggregate. But it's
> conceivable this can happen systematically with some datasets, in
> which case reclaim will drive up the memory consumption and cause
> OOMs, or potentially deplete the reserves with PF_MEMALLOC and cause
> memory deadlocks.
>
> This isn't something we can reasonably accept as worst-case behavior.
>
> > Perhaps there is a way we can do this without allocating a zswap entry?
> >
> > I thought before about having a special list_head that allows us to
> > use the lower bits of the pointers as markers, similar to the xarray.
> > The markers can be used to place different objects on the same list.
> > We can have a list that is a mixture of struct page and struct
> > zswap_entry. I never pursued this idea, and I am sure someone will
> > scream at me for suggesting it. Maybe there is a less convoluted way
> > to keep the LRU ordering intact without allocating memory on the
> > reclaim path.
>
> That should work. Once zswap has exclusive control over the page, it
> is free to muck with its lru linkage. A lower bit tag on the next or
> prev pointer should suffice to distinguish between struct page and
> struct zswap_entry when pulling stuff from the list.

Hmm, I'm a bit iffy about pointer bit hacking, but I guess that's the
least involved way to store the uncompressed page in the zswap LRU
without allocating a zswap_entry for it. Lemme give this a try. If it
looks decently clean I'll send it out :) (Rough sketch of the kind of
tagging I have in mind at the bottom of this mail.)

> We'd also have to teach vmscan.c to hand off the page. It currently
> expects that it either frees the page back to the allocator, or puts
> it back on the LRU. We'd need a compromise where it continues to tear
> down the page and remove the mapping, but then leaves it to zswap.
>
> Neither of those sounds impossible. But since it's a bigger
> complication than this proposal, it probably needs a new cost/benefit
> analysis, with potentially more data on the problem of LRU inversions.
>
> Thanks for your insightful feedback, Yosry.
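
For reference, here's a rough userspace sketch of the low-bit tagging
I have in mind. The struct names and fields are made up for
illustration (not the real struct page / struct zswap_entry, and not
actual zswap code); the point is just that both objects are at least
word-aligned, so bit 0 of their address is always zero and free to
carry a type tag, xarray-style:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Mock stand-ins for illustration only, not the kernel types. */
struct mock_page { unsigned long pfn; };
struct mock_zswap_entry { size_t compressed_len; };

enum lru_obj_type {
	LRU_ZSWAP_ENTRY       = 0,
	LRU_UNCOMPRESSED_PAGE = 1,
};

/* Both structs are word-aligned, so bit 0 of their address is
 * always zero and can carry the type tag instead. */
static uintptr_t lru_tag(void *obj, enum lru_obj_type type)
{
	return (uintptr_t)obj | (uintptr_t)type;
}

static enum lru_obj_type lru_type(uintptr_t tagged)
{
	return (enum lru_obj_type)(tagged & 1);
}

static void *lru_untag(uintptr_t tagged)
{
	return (void *)(tagged & ~(uintptr_t)1);
}

int main(void)
{
	struct mock_page page = { .pfn = 42 };
	struct mock_zswap_entry entry = { .compressed_len = 1024 };

	/* A mixed "LRU" holding both kinds of objects. */
	uintptr_t lru[] = {
		lru_tag(&page, LRU_UNCOMPRESSED_PAGE),
		lru_tag(&entry, LRU_ZSWAP_ENTRY),
	};

	for (size_t i = 0; i < sizeof(lru) / sizeof(lru[0]); i++) {
		if (lru_type(lru[i]) == LRU_UNCOMPRESSED_PAGE) {
			struct mock_page *p = lru_untag(lru[i]);
			printf("uncompressed page, pfn %lu\n", p->pfn);
		} else {
			struct mock_zswap_entry *e = lru_untag(lru[i]);
			printf("zswap entry, %zu bytes\n",
			       e->compressed_len);
		}
	}
	return 0;
}

In the real version the tag would live in the low bit of the
list_head next/prev pointers on the zswap LRU, as you suggest, and
the LRU walk would check it before deciding whether it's looking at a
page or a zswap_entry.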