From: Barry Song <21cnbao@gmail.com>
To: Usama Arif <usamaarif642@gmail.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
hannes@cmpxchg.org, david@redhat.com, willy@infradead.org,
kanchana.p.sridhar@intel.com, yosryahmed@google.com,
nphamcs@gmail.com, chengming.zhou@linux.dev,
ryan.roberts@arm.com, ying.huang@intel.com, riel@surriel.com,
shakeel.butt@linux.dev, kernel-team@meta.com,
linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org
Subject: Re: [RFC 3/4] mm/zswap: add support for large folio zswapin
Date: Mon, 21 Oct 2024 18:49:29 +1300
Message-ID: <CAGsJ_4xyDMUDxVhi0bzZJ4jAd_Hw8Hn25+4epO9u9=iu0QMdoA@mail.gmail.com>
In-Reply-To: <20241018105026.2521366-4-usamaarif642@gmail.com>
On Fri, Oct 18, 2024 at 11:50 PM Usama Arif <usamaarif642@gmail.com> wrote:
>
> At folio allocation time, alloc_swap_folio checks if the entire
> folio is in zswap to determine the folio order.
> During swap_read_folio, zswap_load will check if the entire folio
> is in zswap, and if it is, it will iterate through the pages in the
> folio and decompress them.
> This means the benefits of large folios (fewer page faults, batched
> PTE and rmap manipulation, reduced lru list, TLB coalescing (for arm64
> and amd)) are not lost at swapout when using zswap.
> This patch does not add support for hybrid backends (i.e. folios
> partly present in swap and partly in zswap).
>
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
> ---
> mm/memory.c | 13 +++-------
> mm/zswap.c | 68 ++++++++++++++++++++++++-----------------------------
> 2 files changed, 34 insertions(+), 47 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 49d243131169..75f7b9f5fb32 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4077,13 +4077,14 @@ static bool can_swapin_thp(struct vm_fault *vmf, pte_t *ptep, int nr_pages)
>
> /*
> * swap_read_folio() can't handle the case a large folio is hybridly
> - * from different backends. And they are likely corner cases. Similar
> - * things might be added once zswap support large folios.
> + * from different backends. And they are likely corner cases.
> */
> if (unlikely(swap_zeromap_batch(entry, nr_pages, NULL) != nr_pages))
> return false;
> if (unlikely(non_swapcache_batch(entry, nr_pages) != nr_pages))
> return false;
> + if (unlikely(!zswap_present_test(entry, nr_pages)))
> + return false;
>
> return true;
> }
> @@ -4130,14 +4131,6 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
> if (unlikely(userfaultfd_armed(vma)))
> goto fallback;
>
> - /*
> - * A large swapped out folio could be partially or fully in zswap. We
> - * lack handling for such cases, so fallback to swapping in order-0
> - * folio.
> - */
> - if (!zswap_never_enabled())
> - goto fallback;
> -
> entry = pte_to_swp_entry(vmf->orig_pte);
> /*
> * Get a list of all the (large) orders below PMD_ORDER that are enabled
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 9cc91ae31116..a5aa86c24060 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1624,59 +1624,53 @@ bool zswap_present_test(swp_entry_t swp, int nr_pages)
>
> bool zswap_load(struct folio *folio)
> {
> + int nr_pages = folio_nr_pages(folio);
> swp_entry_t swp = folio->swap;
> + unsigned int type = swp_type(swp);
> pgoff_t offset = swp_offset(swp);
> bool swapcache = folio_test_swapcache(folio);
> - struct xarray *tree = swap_zswap_tree(swp);
> + struct xarray *tree;
> struct zswap_entry *entry;
> + int i;
>
> VM_WARN_ON_ONCE(!folio_test_locked(folio));
>
> if (zswap_never_enabled())
> return false;
>
> - /*
> - * Large folios should not be swapped in while zswap is being used, as
> - * they are not properly handled. Zswap does not properly load large
> - * folios, and a large folio may only be partially in zswap.
> - *
> - * Return true without marking the folio uptodate so that an IO error is
> - * emitted (e.g. do_swap_page() will sigbus).
> - */
> - if (WARN_ON_ONCE(folio_test_large(folio)))
> - return true;
> -
> - /*
> - * When reading into the swapcache, invalidate our entry. The
> - * swapcache can be the authoritative owner of the page and
> - * its mappings, and the pressure that results from having two
> - * in-memory copies outweighs any benefits of caching the
> - * compression work.
> - *
> - * (Most swapins go through the swapcache. The notable
> - * exception is the singleton fault on SWP_SYNCHRONOUS_IO
> - * files, which reads into a private page and may free it if
> - * the fault fails. We remain the primary owner of the entry.)
> - */
> - if (swapcache)
> - entry = xa_erase(tree, offset);
> - else
> - entry = xa_load(tree, offset);
> -
> - if (!entry)
> + if (!zswap_present_test(folio->swap, nr_pages))
> return false;
Hi Usama,

Is there any chance that zswap_present_test() returns true
in do_swap_page() but false by the time we reach zswap_load()?
If that's possible, could we be missing something? For example,
could some of the zswap entries have been freed (with the rest
still present) in the middle of an mTHP swap-in?

If that can happen with an mTHP, my understanding is that we
shouldn't proceed with reading corrupted data from the disk
backend.
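
To make the concern concrete, the window I have in mind is roughly
(a hypothetical interleaving, not something I have reproduced):

  fault path                          writeback / invalidation
  ----------                          ------------------------
  can_swapin_thp()
    zswap_present_test() == true
                                      some of the nr_pages zswap
                                      entries are freed
  zswap_load()
    zswap_present_test() == false,
    or the per-page loop below hits
    a NULL entry

If that interleaving is possible, then purely as an illustration of
failing loudly rather than mixing zswap and backend data, the per-page
loop could bail out when it finds a missing entry, mirroring what the
old code did for unexpected large folios (return true without marking
the folio uptodate so the fault reports an IO error):

	if (swapcache)
		entry = xa_erase(tree, offset + i);
	else
		entry = xa_load(tree, offset + i);
	if (WARN_ON_ONCE(!entry)) {
		/*
		 * Raced with the entries being freed after
		 * zswap_present_test(); earlier subpages are already
		 * decompressed, so don't fall back to the backend.
		 */
		return true;
	}

This is only a sketch of the failure handling though; ideally we would
work out whether the race can happen at all and close it at the source.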
>
> - zswap_decompress(entry, &folio->page);
> + for (i = 0; i < nr_pages; ++i) {
> + tree = swap_zswap_tree(swp_entry(type, offset + i));
> + /*
> + * When reading into the swapcache, invalidate our entry. The
> + * swapcache can be the authoritative owner of the page and
> + * its mappings, and the pressure that results from having two
> + * in-memory copies outweighs any benefits of caching the
> + * compression work.
> + *
> + * (Swapins with swap count > 1 go through the swapcache.
> + * For swap count == 1, the swapcache is skipped and we
> + * remain the primary owner of the entry.)
> + */
> + if (swapcache)
> + entry = xa_erase(tree, offset + i);
> + else
> + entry = xa_load(tree, offset + i);
>
> - count_vm_event(ZSWPIN);
> - if (entry->objcg)
> - count_objcg_events(entry->objcg, ZSWPIN, 1);
> + zswap_decompress(entry, folio_page(folio, i));
>
> - if (swapcache) {
> - zswap_entry_free(entry);
> - folio_mark_dirty(folio);
> + if (entry->objcg)
> + count_objcg_events(entry->objcg, ZSWPIN, 1);
> + if (swapcache)
> + zswap_entry_free(entry);
> }
>
> + count_vm_events(ZSWPIN, nr_pages);
> + if (swapcache)
> + folio_mark_dirty(folio);
> +
> folio_mark_uptodate(folio);
> return true;
> }
> --
> 2.43.5
>
Thanks
barry