From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yosry Ahmed <yosryahmed@google.com>
Date: Thu, 6 Jun 2024 13:31:48 -0700
Subject: Re: [PATCH] mm: zswap: add VM_BUG_ON() if large folio swapin is attempted
To: David Hildenbrand
Cc: Andrew Morton, Johannes Weiner, Nhat Pham, Chengming Zhou, Baolin Wang, Barry Song <21cnbao@gmail.com>, Chris Li, Ryan Roberts, Matthew Wilcox, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240606184818.1566920-1-yosryahmed@google.com> <84d78362-e75c-40c8-b6c2-56d5d5292aa7@redhat.com>
In-Reply-To: <84d78362-e75c-40c8-b6c2-56d5d5292aa7@redhat.com>
On Thu, Jun 6, 2024 at 1:22 PM David Hildenbrand wrote:
>
> On 06.06.24 20:48, Yosry Ahmed wrote:
> > With ongoing work to support large folio swapin, it is important to make
> > sure we do not pass large folios to zswap_load() without implementing
> > proper support.
> >
> > For example, if a swapin fault observes that contiguous PTEs are
> > pointing to contiguous swap entries and tries to swap them in as a large
> > folio, swap_read_folio() will pass in a large folio to zswap_load(), but
> > zswap_load() will only effectively load the first page in the folio. If
> > the first page is not in zswap, the folio will be read from disk, even
> > though other pages may be in zswap.
> >
> > In both cases, this will lead to silent data corruption.
> >
> > Proper large folio swapin support needs to go into zswap before zswap
> > can be enabled in a system that supports large folio swapin.
> >
> > Looking at callers of swap_read_folio(), it seems like they are either
> > allocated from __read_swap_cache_async() or do_swap_page() in the
> > SWP_SYNCHRONOUS_IO path. Both of which allocate order-0 folios, so we
> > are fine for now.
> >
> > Add a VM_BUG_ON() in zswap_load() to make sure that we detect changes in
> > the order of those allocations without proper handling of zswap.
> >
> > Alternatively, swap_read_folio() (or its callers) can be updated to have
> > a fallback mechanism that splits large folios or reads subpages
> > separately. Similar logic may be needed anyway in case part of a large
> > folio is already in the swapcache and the rest of it is swapped out.
> >
> > Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> > ---
> >
> > Sorry for the long CC list, I just found myself repeatedly looking at
> > new series that add swap support for mTHPs / large folios, making sure
> > they do not break with zswap or make incorrect assumptions. This debug
> > check should give us some peace of mind. Hopefully this patch will also
> > raise awareness among people who are working on this.
> >
> > ---
> >  mm/zswap.c | 3 +++
> >  1 file changed, 3 insertions(+)
> >
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index b9b35ef86d9be..6007252429bb2 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -1577,6 +1577,9 @@ bool zswap_load(struct folio *folio)
> >       if (!entry)
> >               return false;
> >
> > +     /* Zswap loads do not handle large folio swapins correctly yet */
> > +     VM_BUG_ON(folio_test_large(folio));
> > +
>
> There is no way we could have a WARN_ON_ONCE() and recover, right?

Not without making more fundamental changes to the surrounding swap
code. Currently zswap_load() returns either true (the folio was loaded
from zswap) or false (the folio is not in zswap).

To handle this correctly, zswap_load() would need to tell
swap_read_folio() which subpages are in zswap and have been loaded,
and then swap_read_folio() would need to read the remaining subpages
from disk. This of course assumes that the caller of swap_read_folio()
made sure that the entire folio is swapped out and protected against
races with other swapins.

Also, because swap_read_folio() cannot split the folio itself, the
swap_read_folio_*() helpers it calls would need to be updated to
handle swapping in tail subpages, which may be questionable in its own
right.

An alternative would be for zswap_load() (or a separate interface) to
tell swap_read_folio() that the folio is only partially in zswap; then
we could just bail and tell the caller that it cannot read the large
folio and should split it.

There may be other options as well, but the bottom line is that it is
possible, just probably not something we want to do right now.

A stronger protection method would be to introduce a config option or
boot parameter for large folio swapin, and then make CONFIG_ZSWAP
depend on it being disabled, or have zswap check it at boot and refuse
to be enabled if it is on.