From: Barry Song <21cnbao@gmail.com>
Date: Thu, 5 Sep 2024 22:42:03 +1200
Subject: Re: [PATCH v4 1/2] mm: store zero pages to be swapped out in a bitmap
To: Usama Arif
Cc: Yosry Ahmed, akpm@linux-foundation.org, chengming.zhou@linux.dev, david@redhat.com, hannes@cmpxchg.org, hughd@google.com, kernel-team@meta.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, nphamcs@gmail.com, shakeel.butt@linux.dev, willy@infradead.org, ying.huang@intel.com, hanchuanhua@oppo.com
References: <20240612124750.2220726-2-usamaarif642@gmail.com> <20240904055522.2376-1-21cnbao@gmail.com>

On Thu, Sep 5, 2024 at 10:37 PM Usama Arif wrote:
>
> On 05/09/2024 11:10, Barry Song wrote:
> > On Thu, Sep 5, 2024 at 8:49 PM Barry Song <21cnbao@gmail.com> wrote:
> >>
> >> On Thu, Sep 5, 2024 at 7:55 PM Yosry Ahmed wrote:
> >>>
> >>> On Thu, Sep 5, 2024 at 12:03 AM Barry Song <21cnbao@gmail.com> wrote:
> >>>>
> >>>> On Thu, Sep 5, 2024 at 5:41 AM Yosry Ahmed wrote:
> >>>>>
> >>>>> [..]
> >>>>>>> I understand the point of doing this to unblock the synchronous large folio swapin support work, but at some point we're gonna have to actually handle the cases where a large folio being swapped in is partially in the swap cache, zswap, the zeromap, etc.
> >>>>>>>
> >>>>>>> All these cases will need similar-ish handling, and I suspect we won't just skip swapping in large folios in all these cases.
> >>>>>>
> >>>>>> I agree that this is definitely the goal. `swap_read_folio()` should be a dependable API that always returns reliable data, regardless of whether `zeromap` or `zswap` is involved. Despite these issues, mTHP swap-in shouldn't be held back. Significant efforts are underway to support large folios in `zswap`, and progress is being made. Not to mention we've already allowed `zeromap` to proceed, even though it doesn't support large folios.
> >>>>>>
> >>>>>> It's genuinely unfair to let the lack of mTHP support in `zeromap` and `zswap` hold swap-in hostage.
> >>>>>
> >>>>
> >>>> Hi Yosry,
> >>>>
> >>>>> Well, two points here:
> >>>>>
> >>>>> 1. I did not say that we should block the synchronous mTHP swapin work for this :) I said the next item on the TODO list for mTHP swapin support should be handling these cases.
> >>>>
> >>>> Thanks for your clarification!
> >>>>
> >>>>>
> >>>>> 2. I think two things are getting conflated here. Zswap needs to support mTHP swapin*. Zeromap already supports mTHPs AFAICT. What is truly missing, and is outside the scope of zswap/zeromap, is being able to support hybrid mTHP swapin.
> >>>>>
> >>>>> When swapping in an mTHP, the swapped entries can be on disk, in the swapcache, in zswap, or in the zeromap. Even if all these things support mTHPs individually, we essentially need support to form an mTHP from swap entries in different backends. That's what I meant. Actually, if we have that, we may not really need mTHP swapin support in zswap, because we can just form the large folio in the swap layer from multiple zswap entries.
> >>>>>
> >>>>
> >>>> After further consideration, I've actually started to disagree with the idea of supporting hybrid swapin (forming an mTHP from swap entries in different backends).
> >>>> My reasoning is as follows:
> >>>
> >>> I do not have any data about this, so you could very well be right here. Handling hybrid swapin could be simply falling back to the smallest order we can swapin from a single backend. We can at least start with this, and collect data about how many mTHP swapins fall back due to hybrid backends. This way we only take the complexity if needed.
> >>>
> >>> I did imagine though that it's possible for two virtually contiguous folios to be swapped out to contiguous swap entries and end up in different media (e.g. if only one of them is zero-filled). I am not sure how rare that would be in practice.
> >>>
> >>>>
> >>>> 1. The scenario where an mTHP is partially zeromap, partially zswap, etc., would be an extremely rare case, as long as we're swapping out the mTHP as a whole and all the modules are handling it accordingly. It's highly unlikely to form this mix of zeromap, zswap, and swapcache unless contiguous VMA virtual addresses happen to get some small folios with aligned and contiguous swap slots. Even then, they would need to be partially zeromap and partially non-zeromap, zswap, etc.
> >>>
> >>> As I mentioned, we can start simple and collect data for this. If it's rare and we don't need to handle it, that's good.
> >>>
> >>>>
> >>>> As you mentioned, zeromap handles mTHP as a whole during swap-out, marking all subpages of the entire mTHP as zeromap rather than just a subset of them.
> >>>>
> >>>> And swap-in can also entirely map a large folio found in the swapcache, based on our previous patchset which has been in mainline:
> >>>> "mm: swap: entirely map large folios found in swapcache"
> >>>> https://lore.kernel.org/all/20240529082824.150954-1-21cnbao@gmail.com/
> >>>>
> >>>> It seems the only thing we're missing is zswap support for mTHP.
> >>>
> >>> It is still possible for two virtually contiguous folios to be swapped out to contiguous swap entries. It is also possible that a large folio is swapped out as a whole, then only a part of it is swapped in later due to memory pressure. If that part is later reclaimed again and gets added to the swapcache, we can run into the hybrid swapin situation. There may be other scenarios as well, I did not think this through.
> >>>
> >>>>
> >>>> 2. Implementing hybrid swap-in would be extremely tricky and could disrupt several software layers. I can share some pseudo code below:
> >>>
> >>> Yeah it definitely would be complex, so we need proper justification for it.
> >>>
> >>>>
> >>>> swap_read_folio()
> >>>> {
> >>>>         if (zeromap_full)
> >>>>                 folio_read_from_zeromap()
> >>>>         else if (zswap_map_full)
> >>>>                 folio_read_from_zswap()
> >>>>         else {
> >>>>                 folio_read_from_swapfile()
> >>>>                 if (zeromap_partial)
> >>>>                         folio_read_from_zeromap_fixup() /* fill zero for partially zeromap subpages */
> >>>>                 if (zswap_partial)
> >>>>                         folio_read_from_zswap_fixup() /* zswap_load for partially zswap-mapped subpages */
> >>>>
> >>>>                 folio_mark_uptodate()
> >>>>                 folio_unlock()
> >>>>         }
> >>>> }
> >>>>
> >>>> We'd also need to modify folio_read_from_swapfile() to skip folio_mark_uptodate() and folio_unlock() after completing the BIO. This approach seems to entirely disrupt the software layers.
> >>>>
> >>>> This could also lead to unnecessary IO operations for subpages that require fixup. Since such cases are quite rare, I believe the added complexity isn't worth it.
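
To make the layering problem above concrete, here is a rough, untested sketch (hypothetical code, not an existing kernel path) of the caller-controlled completion this would force on the swapfile layer; submit_folio_bio() and the need_fixup parameter are made up for illustration:

/*
 * Hypothetical sketch only - shows how completion handling would leak
 * out of folio_read_from_swapfile() once partial-backend fixups exist.
 * submit_folio_bio() is a stand-in, not a real kernel helper.
 */
static void folio_read_from_swapfile(struct folio *folio, bool need_fixup)
{
        submit_folio_bio(folio);        /* read the backing swap slots */

        if (!need_fixup) {
                /* today's behaviour: finish the folio right here */
                folio_mark_uptodate(folio);
                folio_unlock(folio);
                return;
        }

        /*
         * Otherwise the caller must apply folio_read_from_zeromap_fixup()
         * and/or folio_read_from_zswap_fixup() first, then mark the folio
         * uptodate and unlock it itself - completion leaks upwards.
         */
}
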
> >>>>
> >>>> My point is that we should simply check that all PTEs have consistent zeromap, zswap, and swapcache statuses before proceeding, otherwise fall back to the next lower order if needed. This approach improves performance and avoids complex corner cases.
> >>>
> >>> Agree that we should start with that, although we should probably fall back to the largest order we can swapin from a single backend, rather than the next lower order.
> >>>
> >>>>
> >>>> So once zswap mTHP is there, I would also expect an API similar to swap_zeromap_entries_check(), for example: zswap_entries_check(entry, nr), which can return whether we have full, none, or partial zswap, to replace the existing zswap_never_enabled().
> >>>
> >>> I think a better API would be similar to what Usama had. Basically take in (entry, nr) and return how much of it is in zswap starting at entry, so that we can decide the swapin order.
> >>>
> >>> Maybe we can adjust your proposed swap_zeromap_entries_check() as well to do that? Basically return the number of swap entries in the zeromap starting at 'entry'. If 'entry' itself is not in the zeromap we return 0 naturally. That would be a small adjustment/fix over what Usama had, but implementing it with bitmap operations like you did would be better.
> >>
> >> I assume you mean the below:
> >>
> >> /*
> >>  * Return the number of contiguous zeromap entries starting from entry
> >>  */
> >> static inline unsigned int swap_zeromap_entries_count(swp_entry_t entry, int nr)
> >> {
> >>         struct swap_info_struct *sis = swp_swap_info(entry);
> >>         unsigned long start = swp_offset(entry);
> >>         unsigned long end = start + nr;
> >>         unsigned long idx;
> >>
> >>         idx = find_next_bit(sis->zeromap, end, start);
> >>         if (idx != start)
> >>                 return 0;
> >>
> >>         return find_next_zero_bit(sis->zeromap, end, start) - idx;
> >> }
> >>
> >> If yes, I really like this idea. It seems much better than using an enum, which would require adding a new data structure :-) Additionally, returning the number allows callers to fall back to the largest possible order, rather than trying next lower orders sequentially.
> >
> > No, returning 0 after only checking the first entry would still reintroduce the current bug, where the start entry is zeromap but other entries might not be. We need another value to indicate whether the entries are consistent if we want to avoid the enum:
> >
> > /*
> >  * Return the number of contiguous zeromap entries starting from entry;
> >  * if all entries have a consistent zeromap status, *consistent will be
> >  * true, otherwise false.
> >  */
> > static inline unsigned int swap_zeromap_entries_count(swp_entry_t entry,
> >                 int nr, bool *consistent)
> > {
> >         struct swap_info_struct *sis = swp_swap_info(entry);
> >         unsigned long start = swp_offset(entry);
> >         unsigned long end = start + nr;
> >         unsigned long s_idx, c_idx;
> >
> >         s_idx = find_next_bit(sis->zeromap, end, start);
>
> In all of the implementations you sent, you are using find_next_bit(.., end, start), but I believe it should be find_next_bit(.., nr, start)?

I guess not; the tricky thing is that "size" means the size from the first bit of the bitmap, not from the "start" bit.
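
To make that concrete, here is a toy userspace model of the find_next_bit() contract (illustrative only, not the kernel implementation): the size argument is the absolute end of the scan in bits, so the scan covers [offset, size), and passing nr instead of start + nr would scan the empty range [start, nr) whenever start >= nr:

#include <stdio.h>

/* toy model: scan [offset, size) for the first set bit; return size if none */
static unsigned long toy_find_next_bit(const unsigned long *map,
                                       unsigned long size, unsigned long offset)
{
        for (; offset < size; offset++)
                if (map[offset / 64] & (1UL << (offset % 64)))
                        return offset;
        return size;
}

int main(void)
{
        unsigned long zeromap[1] = { 0xf00UL };  /* bits 8..11 set */
        unsigned long start = 8, nr = 4;

        /* correct: scan [8, 12) and find bit 8 */
        printf("%lu\n", toy_find_next_bit(zeromap, start + nr, start));
        /* wrong: passing nr scans [8, 4), an empty range, "nothing found" */
        printf("%lu\n", toy_find_next_bit(zeromap, nr, start));
        return 0;
}

The second call returns 4 (== its size argument, i.e. "no zeromap bits"), which is exactly how the current bug would be reintroduced.
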
>
> TBH, I liked the enum implementation you had in https://lore.kernel.org/all/20240905002926.1055-1-21cnbao@gmail.com/
> It's the easiest to review and understand, and the least likely to introduce any bugs. But it could be a personal preference.
> The likelihood of having a number of contiguous zeromap entries *that* is less than nr is very low, right? If so, we could go with the enum implementation?

What about the bool implementation I sent in the last email? It seems to be the simplest code.

> >
> > if (s_idx == end) {
> >         *consistent = true;
> >         return 0;
> > }
> >
> > c_idx = find_next_zero_bit(sis->zeromap, end, start);
> > if (c_idx == end) {
> >         *consistent = true;
> >         return nr;
> > }
> >
> > *consistent = false;
> > if (s_idx != start)
> >         return 0;
> > return c_idx - s_idx;
> > }
> >
> > I can actually switch the places of the "consistent" flag and the returned number if that looks better.
> >
> >>
> >> Hi Usama,
> >> what is your take on this?
> >>
> >>>
> >>>>
> >>>> Though I am not sure how cheaply zswap could implement it, swap_zeromap_entries_check() could be two simple bit operations:
> >>>>
> >>>> +static inline zeromap_stat_t swap_zeromap_entries_check(swp_entry_t entry, int nr)
> >>>> +{
> >>>> +        struct swap_info_struct *sis = swp_swap_info(entry);
> >>>> +        unsigned long start = swp_offset(entry);
> >>>> +        unsigned long end = start + nr;
> >>>> +
> >>>> +        if (find_next_bit(sis->zeromap, end, start) == end)
> >>>> +                return SWAP_ZEROMAP_NON;
> >>>> +        if (find_next_zero_bit(sis->zeromap, end, start) == end)
> >>>> +                return SWAP_ZEROMAP_FULL;
> >>>> +
> >>>> +        return SWAP_ZEROMAP_PARTIAL;
> >>>> +}
> >>>>
> >>>> 3. swapcache is different from zeromap and zswap. Swapcache indicates that the memory is still available and should be re-mapped rather than allocating a new folio. Our previous patchset has implemented a full re-map of an mTHP in do_swap_page(), as mentioned in 1.
> >>>>
> >>>> For the same reason as point 1, partial swapcache is a rare edge case. Not re-mapping it, and instead allocating a new folio, would add significant complexity.
> >>>>
> >>>>>>
> >>>>>> Nonetheless, `zeromap` and `zswap` are distinct cases. With `zeromap`, we permit almost all mTHP swap-ins, except for those rare situations where small folios that were swapped out happen to have contiguous and aligned swap slots.
> >>>>>>
> >>>>>> swapcache is another quite different story; since our user scenarios begin from the simplest sync io on mobile phones, we don't quite care about swapcache.
> >>>>>
> >>>>> Right. The reason I bring this up is, as I mentioned above, that there is a common problem of forming large folios from different sources, which includes the swap cache. The fact that synchronous swapin does not use the swapcache was a happy coincidence for you, as you can add support for mTHP swapins without handling this case yet ;)
> >>>>
> >>>> As I mentioned above, I'd really rather filter out those corner cases than support them, and not just for the current situation, to unlock the swap-in series :-)
> >>>
> >>> If they are indeed corner cases, then I definitely agree.

Thanks
Barry