From: Yosry Ahmed <yosryahmed@google.com>
Date: Mon, 10 Jun 2024 10:42:05 -0700
Subject: Re: [PATCH v2] mm: zswap: handle incorrect attempts to load of large folios
To: Barry Song <21cnbao@gmail.com>
Cc: Andrew Morton, Johannes Weiner, Nhat Pham, Chengming Zhou,
	Baolin Wang, Chris Li, Ryan Roberts, David Hildenbrand,
	Matthew Wilcox, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240608023654.3513385-1-yosryahmed@google.com>
On Fri, Jun 7, 2024 at 9:13 PM Barry Song <21cnbao@gmail.com> wrote:
>
> On Sat, Jun 8, 2024 at 10:37 AM Yosry Ahmed <yosryahmed@google.com> wrote:
> >
> > Zswap does not support storing or loading large folios. Until proper
> > support is added, attempts to load large folios from zswap are a bug.
> >
> > For example, if a swapin fault observes that contiguous PTEs are
> > pointing to contiguous swap entries and tries to swap them in as a large
> > folio, swap_read_folio() will pass in a large folio to zswap_load(), but
> > zswap_load() will only effectively load the first page in the folio. If
> > the first page is not in zswap, the folio will be read from disk, even
> > though other pages may be in zswap.
> >
> > In both cases, this will lead to silent data corruption. Proper support
> > needs to be added before large folio swapins and zswap can work
> > together.
> >
> > Looking at the callers of swap_read_folio(), the folios it is passed
> > seem to be allocated either in __read_swap_cache_async() or in
> > do_swap_page() in the SWP_SYNCHRONOUS_IO path. Both allocate order-0
> > folios, so everything is fine for now.
> >
> > However, there is ongoing work to add support for large folio swapins
> > [1]. To make sure new development does not break zswap (or get broken by
> > zswap), add minimal handling of incorrect loads of large folios to
> > zswap.
> >
> > First, move the call to folio_mark_uptodate() inside zswap_load().
> >
> > If a large folio load is attempted, and any page in that folio is in
> > zswap, return 'true' without calling folio_mark_uptodate(). This will
> > prevent the folio from being read from disk, and will emit an IO error
> > because the folio is not uptodate (e.g. do_swap_page() will return
> > VM_FAULT_SIGBUS). It may not be reliable recovery in all cases, but it
> > is better than nothing.
> >
> > This was tested by hacking the allocation in __read_swap_cache_async()
> > to use order 2 and __GFP_COMP.
> >
> > In the future, to handle this correctly, the swapin code should:
> > (a) Fall back to order-0 swapins if zswap was ever used on the machine,
> >     because compressed pages remain in zswap after it is disabled.
> > (b) Add proper support to swap in large folios from zswap (fully or
> >     partially).
> >
> > Probably start with (a), then follow up with (b).
> >
> > [1] https://lore.kernel.org/linux-mm/20240304081348.197341-6-21cnbao@gmail.com/
> >
> > Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> > ---
> >
> > v1: https://lore.kernel.org/lkml/20240606184818.1566920-1-yosryahmed@google.com/
> >
> > v1 -> v2:
> > - Instead of using VM_BUG_ON() use WARN_ON_ONCE() and add some recovery
> >   handling (David Hildenbrand).
> >
> > ---
> >  mm/page_io.c |  1 -
> >  mm/zswap.c   | 22 +++++++++++++++++++++-
> >  2 files changed, 21 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/page_io.c b/mm/page_io.c
> > index f1a9cfab6e748..8f441dd8e109f 100644
> > --- a/mm/page_io.c
> > +++ b/mm/page_io.c
> > @@ -517,7 +517,6 @@ void swap_read_folio(struct folio *folio, struct swap_iocb **plug)
> >         delayacct_swapin_start();
> >
> >         if (zswap_load(folio)) {
> > -               folio_mark_uptodate(folio);
> >                 folio_unlock(folio);
> >         } else if (data_race(sis->flags & SWP_FS_OPS)) {
> >                 swap_read_folio_fs(folio, plug);
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index b9b35ef86d9be..ebb878d3e7865 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -1557,6 +1557,26 @@ bool zswap_load(struct folio *folio)
> >
> >         VM_WARN_ON_ONCE(!folio_test_locked(folio));
> >
> > +       /*
> > +        * Large folios should not be swapped in while zswap is being used, as
> > +        * they are not properly handled. Zswap does not properly load large
> > +        * folios, and a large folio may only be partially in zswap.
> > +        *
> > +        * If any of the subpages are in zswap, reading from disk would result
> > +        * in data corruption, so return true without marking the folio uptodate
> > +        * so that an IO error is emitted (e.g. do_swap_page() will sigfault).
> > +        *
> > +        * Otherwise, return false and read the folio from disk.
> > +        */
> > +       if (folio_test_large(folio)) {
> > +               if (xa_find(tree, &offset,
> > +                           offset + folio_nr_pages(folio) - 1, XA_PRESENT)) {
> > +                       WARN_ON_ONCE(1);
> > +                       return true;
> > +               }
> > +               return false;
>
> IMHO, this appears to be over-designed. Personally, I would opt to
> use
>
> if (folio_test_large(folio))
>         return true;

I am sure you mean "return false" here. Always returning true means we
will never read a large folio from either zswap or disk, whether it is
in zswap or not, which basically guarantees data corruption for large
folio swapins even if zswap is disabled :)

>
> Before we address large folio support in zswap, it's essential
> not to let them coexist. Expecting valid data by lunchtime is
> not advisable.

The goal here is to enable development of large folio swapin without
breaking zswap or being blocked on adding support in zswap.

If we always return false for large folios, as you suggest, then even
if the folio (or part of it) is in zswap, we will go read it from disk.
This will result in silent data corruption.

As you mentioned before, you spent a week debugging problems with your
large folio swapin series because of a zswap problem, and even then,
the zswap_is_enabled() check you had is not enough to prevent problems
(if zswap was enabled at some point before), as I mentioned earlier.

So we need stronger checks to make sure we don't break things when we
support large folio swapin. Since we can't just check whether zswap is
enabled, we instead need to check whether the folio (or any part of it)
is in zswap. We could just WARN in that case, but delivering the error
to userspace only takes a couple of extra lines of code (not setting
uptodate), and it makes the problem much easier to notice.

I am not sure I understand what you mean. The alternative is to
introduce a config option (perhaps an internal one) for large folio
swapin and make it depend on !CONFIG_ZSWAP, or to make zswap refuse to
be enabled if large folio swapin is enabled (through a config or boot
option). This is until proper handling is added, of course (a rough
sketch of what such a guard could look like is at the end of this
mail).

>
> > +       }
> > +
> >         /*
> >          * When reading into the swapcache, invalidate our entry. The
> >          * swapcache can be the authoritative owner of the page and
> > @@ -1590,7 +1610,7 @@ bool zswap_load(struct folio *folio)
> >                 zswap_entry_free(entry);
> >                 folio_mark_dirty(folio);
> >         }
> > -
> > +       folio_mark_uptodate(folio);
> >         return true;
> >  }
> >
> > --
> > 2.45.2.505.gda0bf45e8d-goog
> >
>
> Thanks
> Barry
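
P.S. To make the "refuse to be enabled" alternative above a little more
concrete, here is a rough, purely illustrative sketch of the kind of
guard I have in mind; it is not part of the patch. The config symbol
CONFIG_LARGE_FOLIO_SWAPIN and the helper name zswap_can_be_enabled() are
made up for illustration, since no such option exists yet:

#include <linux/kconfig.h>	/* IS_ENABLED() */
#include <linux/printk.h>	/* pr_warn() */

/*
 * Hypothetical guard for zswap's enable path: refuse to turn zswap on
 * while (hypothetical) large folio swapin support is compiled in, until
 * zswap learns to handle large folios properly.
 */
static bool zswap_can_be_enabled(void)
{
	if (IS_ENABLED(CONFIG_LARGE_FOLIO_SWAPIN)) {
		pr_warn("zswap: large folio swapin is enabled, refusing to enable zswap\n");
		return false;
	}
	return true;
}

The other direction, making the (hypothetical) large folio swapin config
option depend on !CONFIG_ZSWAP in Kconfig, would enforce the same
exclusion at build time instead.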