From: Yosry Ahmed <yosryahmed@google.com>
Date: Thu, 5 Sep 2024 00:55:02 -0700
Subject: Re: [PATCH v4 1/2] mm: store zero pages to be swapped out in a bitmap
To: Barry Song <21cnbao@gmail.com>
Cc: usamaarif642@gmail.com, akpm@linux-foundation.org,
	chengming.zhou@linux.dev, david@redhat.com, hannes@cmpxchg.org,
	hughd@google.com, kernel-team@meta.com, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, nphamcs@gmail.com, shakeel.butt@linux.dev,
	willy@infradead.org, ying.huang@intel.com, hanchuanhua@oppo.com
References: <20240612124750.2220726-2-usamaarif642@gmail.com>
	<20240904055522.2376-1-21cnbao@gmail.com>
On Thu, Sep 5, 2024 at 12:03 AM Barry Song <21cnbao@gmail.com> wrote:
>
> On Thu, Sep 5, 2024 at 5:41 AM Yosry Ahmed wrote:
> >
> > [..]
> > > > I understand the point of doing this to unblock the synchronous large
> > > > folio swapin support work, but at some point we're gonna have to
> > > > actually handle the cases where a large folio being swapped in is
> > > > partially in the swap cache, zswap, the zeromap, etc.
> > > >
> > > > All these cases will need similar-ish handling, and I suspect we won't
> > > > just skip swapping in large folios in all these cases.
> > >
> > > I agree that this is definitely the goal. `swap_read_folio()` should be a
> > > dependable API that always returns reliable data, regardless of whether
> > > `zeromap` or `zswap` is involved. Despite these issues, mTHP swap-in
> > > shouldn't be held back. Significant efforts are underway to support
> > > large folios in `zswap`, and progress is being made. Not to mention
> > > we've already allowed `zeromap` to proceed, even though it doesn't
> > > support large folios.
> > >
> > > It's genuinely unfair to let the lack of mTHP support in `zeromap` and
> > > `zswap` hold swap-in hostage.
>
> Hi Yosry,
>
> > Well, two points here:
> >
> > 1. I did not say that we should block the synchronous mTHP swapin work
> > for this :) I said the next item on the TODO list for mTHP swapin
> > support should be handling these cases.
>
> Thanks for your clarification!
>
> > 2. I think two things are getting conflated here. Zswap needs to
> > support mTHP swapin. Zeromap already supports mTHPs AFAICT. What is
> > truly missing, and is outside the scope of zswap/zeromap, is being
> > able to support hybrid mTHP swapin.
> >
> > When swapping in an mTHP, the swapped entries can be on disk, in the
> > swapcache, in zswap, or in the zeromap. Even if all these things
> > support mTHPs individually, we essentially need support to form an
> > mTHP from swap entries in different backends. That's what I meant.
> > Actually if we have that, we may not really need mTHP swapin support
> > in zswap, because we can just form the large folio in the swap layer
> > from multiple zswap entries.
>
> After further consideration, I've actually started to disagree with the
> idea of supporting hybrid swapin (forming an mTHP from swap entries in
> different backends). My reasoning is as follows:

I do not have any data about this, so you could very well be right
here. Handling hybrid swapin could be simply falling back to the
smallest order we can swap in from a single backend. We can at least
start with this, and collect data about how many mTHP swapins fall
back due to hybrid backends. This way we only take on the complexity
if needed.

I did imagine though that it's possible for two virtually contiguous
folios to be swapped out to contiguous swap entries and end up in
different media (e.g. if only one of them is zero-filled). I am not
sure how rare that would be in practice.
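To make the fallback idea above concrete, here is a minimal sketch. It
is untested, it only consults the zeromap, and it relies on the
hypothetical swap_zeromap_entries_check() helper (and its return
values) proposed further down in this thread, not on any existing
kernel API:

/*
 * Sketch only: find the largest order, at or below the faulting
 * order, whose swap entries are uniformly inside or outside the
 * zeromap, so the whole folio can be read from a single backend.
 * A real version would also check zswap and the swapcache, and
 * would keep 'entry' aligned to each candidate order.
 */
static int swapin_uniform_order(swp_entry_t entry, int order)
{
	while (order > 0 &&
	       swap_zeromap_entries_check(entry, 1 << order) ==
			SWAP_ZEROMAP_PARTIAL)
		order--;

	return order;	/* order 0 (a single entry) is never partial */
}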
>
> 1. The scenario where an mTHP is partially zeromap, partially zswap,
> etc., would be an extremely rare case, as long as we're swapping out
> the mTHP as a whole and all the modules are handling it accordingly.
> It's highly unlikely to form this mix of zeromap, zswap, and swapcache
> unless contiguous VMA virtual addresses happen to get some small
> folios with aligned and contiguous swap slots. Even then, they would
> need to be partially zeromap and partially non-zeromap, zswap, etc.

As I mentioned, we can start simple and collect data for this. If it's
rare and we don't need to handle it, that's good.

> As you mentioned, zeromap handles an mTHP as a whole during swapout,
> marking all subpages of the entire mTHP as zeromap rather than just a
> subset of them.
>
> And swap-in can also entirely map a large folio found in the
> swapcache, based on our previous patchset, which is already in
> mainline:
> "mm: swap: entirely map large folios found in swapcache"
> https://lore.kernel.org/all/20240529082824.150954-1-21cnbao@gmail.com/
>
> It seems the only thing we're missing is zswap support for mTHP.

It is still possible for two virtually contiguous folios to be swapped
out to contiguous swap entries. It is also possible that a large folio
is swapped out as a whole, then only a part of it is swapped in later
due to memory pressure. If that part is reclaimed again and gets added
to the swapcache, we can run into the hybrid swapin situation. There
may be other scenarios as well; I did not think this through.

> 2. Implementing hybrid swap-in would be extremely tricky and could
> disrupt several software layers. I can share some pseudo code below:

Yeah, it definitely would be complex, so we need proper justification
for it.

> swap_read_folio()
> {
>         if (zeromap_full)
>                 folio_read_from_zeromap()
>         else if (zswap_map_full)
>                 folio_read_from_zswap()
>         else {
>                 folio_read_from_swapfile()
>                 /* fill zeroes for partially zeromap subpages */
>                 if (zeromap_partial)
>                         folio_read_from_zeromap_fixup()
>                 /* zswap_load for partially zswap-mapped subpages */
>                 if (zswap_partial)
>                         folio_read_from_zswap_fixup()
>
>                 folio_mark_uptodate()
>                 folio_unlock()
>         }
> }
>
> We'd also need to modify folio_read_from_swapfile() to skip
> folio_mark_uptodate() and folio_unlock() after completing the BIO.
> This approach seems to entirely disrupt the software layers.
>
> This could also lead to unnecessary IO operations for subpages that
> require fixup. Since such cases are quite rare, I believe the added
> complexity isn't worth it.
>
> My point is that we should simply check that all PTEs have consistent
> zeromap, zswap, and swapcache statuses before proceeding, otherwise
> fall back to the next lower order if needed. This approach improves
> performance and avoids complex corner cases.

Agree that we should start with that, although we should probably fall
back to the largest order we can swap in from a single backend, rather
than to the next lower order.

> So once zswap mTHP is there, I would also expect an API similar to
> swap_zeromap_entries_check(), for example:
> zswap_entries_check(entry, nr), which can return whether we have
> full, none, or partial zswap, to replace the existing
> zswap_never_enabled().

I think a better API would be similar to what Usama had. Basically,
take in (entry, nr) and return how much of it is in zswap starting at
'entry', so that we can decide the swapin order.

Maybe we can adjust your proposed swap_zeromap_entries_check() as well
to do that? Basically return the number of swap entries in the zeromap
starting at 'entry'. If 'entry' itself is not in the zeromap we return
0 naturally.
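For example, something like the below (untested sketch;
swap_zeromap_entries_count() is just a hypothetical name for the
adjusted helper, built on the same bitmap fields as your snippet):

/*
 * Return the number of contiguous swap entries in the zeromap,
 * starting at 'entry'. Returns 0 if 'entry' itself is not in the
 * zeromap, and nr if all 'nr' entries are. A caller that gets 0 can
 * then batch over the leading non-zeromap entries separately.
 */
static inline int swap_zeromap_entries_count(swp_entry_t entry, int nr)
{
	struct swap_info_struct *sis = swp_swap_info(entry);
	unsigned long start = swp_offset(entry);
	unsigned long end = start + nr;

	return find_next_zero_bit(sis->zeromap, end, start) - start;
}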
That would be a small adjustment/fix over what Usama had, but
implementing it with bitmap operations like you did would be better.

> Though I am not sure how cheaply zswap can implement it,
> swap_zeromap_entries_check() could be two simple bit operations:
>
> +static inline zeromap_stat_t swap_zeromap_entries_check(swp_entry_t entry, int nr)
> +{
> +	struct swap_info_struct *sis = swp_swap_info(entry);
> +	unsigned long start = swp_offset(entry);
> +	unsigned long end = start + nr;
> +
> +	if (find_next_bit(sis->zeromap, end, start) == end)
> +		return SWAP_ZEROMAP_NON;
> +	if (find_next_zero_bit(sis->zeromap, end, start) == end)
> +		return SWAP_ZEROMAP_FULL;
> +
> +	return SWAP_ZEROMAP_PARTIAL;
> +}
>
> 3. swapcache is different from zeromap and zswap. Swapcache indicates
> that the memory is still available and should be re-mapped rather
> than allocating a new folio. Our previous patchset has implemented a
> full re-map of an mTHP in do_swap_page(), as mentioned in 1.
>
> For the same reason as point 1, partial swapcache is a rare edge
> case. Not re-mapping it and instead allocating a new folio would add
> significant complexity.
>
> > > Nonetheless, `zeromap` and `zswap` are distinct cases. With
> > > `zeromap`, we permit almost all mTHP swap-ins, except for those
> > > rare situations where small folios that were swapped out happen
> > > to have contiguous and aligned swap slots.
> > >
> > > swapcache is another quite different story; since our user
> > > scenarios begin from the simplest sync io on mobile phones, we
> > > don't quite care about swapcache.
> >
> > Right. The reason I bring this up is, as I mentioned above, there
> > is a common problem of forming large folios from different sources,
> > which includes the swap cache. The fact that synchronous swapin
> > does not use the swapcache was a happy coincidence for you, as you
> > can add support for mTHP swapins without handling this case yet ;)
>
> As I mentioned above, I'd really rather filter out those corner cases
> than support them, not just for the current situation to unlock the
> swap-in series :-)

If they are indeed corner cases, then I definitely agree.