From: Barry Song <21cnbao@gmail.com>
Date: Mon, 4 Mar 2024 14:34:35 +1300
Subject: Re: [PATCH RFC v2 5/5] mm: support large folios swapin as a whole
To: akpm@linux-foundation.org, david@redhat.com, linux-mm@kvack.org, ryan.roberts@arm.com, chrisl@kernel.org
Cc: linux-kernel@vger.kernel.org, mhocko@suse.com, shy828301@gmail.com, steven.price@arm.com, surenb@google.com, wangkefeng.wang@huawei.com, willy@infradead.org, xiang@kernel.org, ying.huang@intel.com, yuzhao@google.com, kasong@tencent.com, yosryahmed@google.com, nphamcs@gmail.com, chengming.zhou@linux.dev, hannes@cmpxchg.org, linux-arm-kernel@lists.infradead.org, Chuanhua Han, Barry Song
In-Reply-To: <20240229003753.134193-6-21cnbao@gmail.com>
References: <20240229003753.134193-1-21cnbao@gmail.com> <20240229003753.134193-6-21cnbao@gmail.com>

On Thu, Feb 29, 2024 at 1:39 PM Barry Song <21cnbao@gmail.com> wrote:
>
> From: Chuanhua Han
>
> On an embedded system like Android, more than half of anon memory is
> actually in swap devices such as zRAM. For example, while an app is
> switched to background, most of its memory might be swapped out.
>
> Now we have mTHP features, but unfortunately, if we don't support large
> folio swap-in, once those large folios are swapped out we immediately
> lose the performance gain we get from large folios and hardware
> optimizations such as CONT-PTE.
>
> This patch brings up mTHP swap-in support. Right now, we limit mTHP
> swap-in to those contiguous swap entries which were likely swapped out
> from an mTHP as a whole.
>
> On the other hand, the current implementation only covers the
> SWAP_SYNCHRONOUS case. It doesn't support swapin_readahead as large
> folios yet.
>
> Right now, we re-fault large folios which are still in the swapcache as
> a whole. This effectively reduces the extra loops and early exits which
> we added in arch_swap_restore() while supporting MTE restore for folios
> rather than pages. It also reduces the work in do_swap_page(), as PTEs
> used to be set one by one even when we hit a large folio in the
> swapcache.
>
> Signed-off-by: Chuanhua Han
> Co-developed-by: Barry Song
> Signed-off-by: Barry Song
> ---
>  mm/memory.c | 191 ++++++++++++++++++++++++++++++++++++++++++----------
>  1 file changed, 157 insertions(+), 34 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 90b08b7cbaac..471689ce4e91 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -104,9 +104,16 @@ struct page *mem_map;
>  EXPORT_SYMBOL(mem_map);
>  #endif
>
> +/* A choice of behaviors for alloc_anon_folio() */
> +enum behavior {
> +	DO_SWAP_PAGE,
> +	DO_ANON_PAGE,
> +};
> +
>  static vm_fault_t do_fault(struct vm_fault *vmf);
>  static vm_fault_t do_anonymous_page(struct vm_fault *vmf);
>  static bool vmf_pte_changed(struct vm_fault *vmf);
> +static struct folio *alloc_anon_folio(struct vm_fault *vmf, enum behavior behavior);
>
>  /*
>   * Return true if the original pte was a uffd-wp pte marker (so the pte was
> @@ -3974,6 +3981,52 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
>  	return VM_FAULT_SIGBUS;
>  }
>
> +/*
> + * check a range of PTEs are completely swap entries with
> + * contiguous swap offsets and the same SWAP_HAS_CACHE.
> + * pte must be first one in the range
> + */
> +static bool is_pte_range_contig_swap(pte_t *pte, int nr_pages)
> +{
> +	int i;
> +	struct swap_info_struct *si;
> +	swp_entry_t entry;
> +	unsigned type;
> +	pgoff_t start_offset;
> +	char has_cache;
> +
> +	entry = pte_to_swp_entry(ptep_get_lockless(pte));
> +	if (non_swap_entry(entry))
> +		return false;
> +	start_offset = swp_offset(entry);
> +	if (start_offset % nr_pages)
> +		return false;
> +
> +	si = swp_swap_info(entry);
> +	type = swp_type(entry);
> +	has_cache = si->swap_map[start_offset] & SWAP_HAS_CACHE;
> +	for (i = 1; i < nr_pages; i++) {
> +		entry = pte_to_swp_entry(ptep_get_lockless(pte + i));
> +		if (non_swap_entry(entry))
> +			return false;
> +		if (swp_offset(entry) != start_offset + i)
> +			return false;
> +		if (swp_type(entry) != type)
> +			return false;
> +		/*
> +		 * while allocating a large folio and doing swap_read_folio for the
> +		 * SWP_SYNCHRONOUS_IO path, which is the case the being faulted pte
> +		 * doesn't have swapcache. We need to ensure all PTEs have no cache
> +		 * as well, otherwise, we might go to swap devices while the content
> +		 * is in swapcache
> +		 */
> +		if ((si->swap_map[start_offset + i] & SWAP_HAS_CACHE) != has_cache)
> +			return false;
> +	}
> +
> +	return true;
> +}
> +
>  /*
>   * We enter with non-exclusive mmap_lock (to exclude vma changes,
>   * but allow concurrent faults), and pte mapped but not yet locked.
> @@ -3995,6 +4048,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	pte_t pte;
>  	vm_fault_t ret = 0;
>  	void *shadow = NULL;
> +	int nr_pages = 1;
> +	unsigned long start_address;
> +	pte_t *start_pte;
>
>  	if (!pte_unmap_same(vmf))
>  		goto out;
> @@ -4058,28 +4114,32 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	if (!folio) {
>  		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
>  		    __swap_count(entry) == 1) {
> -			/*
> -			 * Prevent parallel swapin from proceeding with
> -			 * the cache flag. Otherwise, another thread may
> -			 * finish swapin first, free the entry, and swapout
> -			 * reusing the same entry. It's undetectable as
> -			 * pte_same() returns true due to entry reuse.
> -			 */
> -			if (swapcache_prepare(entry)) {
> -				/* Relax a bit to prevent rapid repeated page faults */
> -				schedule_timeout_uninterruptible(1);
> -				goto out;
> -			}
> -			need_clear_cache = true;
> -
>  			/* skip swapcache */
> -			folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
> -						vma, vmf->address, false);
> +			folio = alloc_anon_folio(vmf, DO_SWAP_PAGE);
>  			page = &folio->page;
>  			if (folio) {
>  				__folio_set_locked(folio);
>  				__folio_set_swapbacked(folio);
>
> +				if (folio_test_large(folio)) {
> +					nr_pages = folio_nr_pages(folio);
> +					entry.val = ALIGN_DOWN(entry.val, nr_pages);
> +				}
> +
> +				/*
> +				 * Prevent parallel swapin from proceeding with
> +				 * the cache flag. Otherwise, another thread may
> +				 * finish swapin first, free the entry, and swapout
> +				 * reusing the same entry. It's undetectable as
> +				 * pte_same() returns true due to entry reuse.
> +				 */
> +				if (swapcache_prepare_nr(entry, nr_pages)) {
> +					/* Relax a bit to prevent rapid repeated page faults */
> +					schedule_timeout_uninterruptible(1);
> +					goto out;
> +				}
> +				need_clear_cache = true;
> +
>  				if (mem_cgroup_swapin_charge_folio(folio,
>  							vma->vm_mm, GFP_KERNEL,
>  							entry)) {
> @@ -4185,6 +4245,42 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	 */
>  	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
>  			&vmf->ptl);
> +
> +	start_address = vmf->address;
> +	start_pte = vmf->pte;
> +	if (folio_test_large(folio)) {
> +		unsigned long nr = folio_nr_pages(folio);
> +		unsigned long addr = ALIGN_DOWN(vmf->address, nr * PAGE_SIZE);
> +		pte_t *aligned_pte = vmf->pte - (vmf->address - addr) / PAGE_SIZE;
> +
> +		/*
> +		 * case 1: we are allocating large_folio, try to map it as a whole
> +		 * iff the swap entries are still entirely mapped;
> +		 * case 2: we hit a large folio in swapcache, and all swap entries
> +		 * are still entirely mapped, try to map a large folio as a whole.
> +		 * otherwise, map only the faulting page within the large folio
> +		 * which is swapcache
> +		 */
> +		if (!is_pte_range_contig_swap(aligned_pte, nr)) {
> +			if (nr_pages > 1) /* ptes have changed for case 1 */
> +				goto out_nomap;
> +			goto check_pte;
> +		}
> +
> +		start_address = addr;
> +		start_pte = aligned_pte;
> +		/*
> +		 * the below has been done before swap_read_folio()
> +		 * for case 1
> +		 */
> +		if (unlikely(folio == swapcache)) {
> +			nr_pages = nr;
> +			entry.val = ALIGN_DOWN(entry.val, nr_pages);
> +			page = &folio->page;
> +		}
> +	}
> +
> +check_pte:
>  	if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
>  		goto out_nomap;
>
> @@ -4252,12 +4348,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	 * We're already holding a reference on the page but haven't mapped it
>  	 * yet.
>  	 */
> -	swap_free(entry);
> +	swap_nr_free(entry, nr_pages);
>  	if (should_try_to_free_swap(folio, vma, vmf->flags))
>  		folio_free_swap(folio);
>
> -	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
> -	dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
> +	folio_ref_add(folio, nr_pages - 1);
> +	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
> +	add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
> +
>  	pte = mk_pte(page, vma->vm_page_prot);
>
>  	/*
> @@ -4267,14 +4365,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	 * exclusivity.
>  	 */
>  	if (!folio_test_ksm(folio) &&
> -	    (exclusive || folio_ref_count(folio) == 1)) {
> +	    (exclusive || folio_ref_count(folio) == nr_pages)) {
>  		if (vmf->flags & FAULT_FLAG_WRITE) {
>  			pte = maybe_mkwrite(pte_mkdirty(pte), vma);
>  			vmf->flags &= ~FAULT_FLAG_WRITE;
>  		}
>  		rmap_flags |= RMAP_EXCLUSIVE;
>  	}
> -	flush_icache_page(vma, page);
> +	flush_icache_pages(vma, page, nr_pages);
>  	if (pte_swp_soft_dirty(vmf->orig_pte))
>  		pte = pte_mksoft_dirty(pte);
>  	if (pte_swp_uffd_wp(vmf->orig_pte))
> @@ -4283,17 +4381,19 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>
>  	/* ksm created a completely new copy */
>  	if (unlikely(folio != swapcache && swapcache)) {
> -		folio_add_new_anon_rmap(folio, vma, vmf->address);
> +		folio_add_new_anon_rmap(folio, vma, start_address);
>  		folio_add_lru_vma(folio, vma);
> +	} else if (!folio_test_anon(folio)) {
> +		folio_add_new_anon_rmap(folio, vma, start_address);
>  	} else {
> -		folio_add_anon_rmap_pte(folio, page, vma, vmf->address,
> +		folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, start_address,
>  					rmap_flags);
>  	}
>
>  	VM_BUG_ON(!folio_test_anon(folio) ||
>  		  (pte_write(pte) && !PageAnonExclusive(page)));
> -	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
> -	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
> +	set_ptes(vma->vm_mm, start_address, start_pte, pte, nr_pages);
> +	arch_do_swap_page(vma->vm_mm, vma, start_address, pte, vmf->orig_pte);
>
>  	folio_unlock(folio);
>  	if (folio != swapcache && swapcache) {
> @@ -4310,6 +4410,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	}
>
>  	if (vmf->flags & FAULT_FLAG_WRITE) {
> +		if (nr_pages > 1)
> +			vmf->orig_pte = ptep_get(vmf->pte);
> +
>  		ret |= do_wp_page(vmf);
>  		if (ret & VM_FAULT_ERROR)
>  			ret &= VM_FAULT_ERROR;
> @@ -4317,14 +4420,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	}
>
>  	/* No need to invalidate - it was non-present before */
> -	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
> +	update_mmu_cache_range(vmf, vma, start_address, start_pte, nr_pages);
> unlock:
>  	if (vmf->pte)
>  		pte_unmap_unlock(vmf->pte, vmf->ptl);
> out:
>  	/* Clear the swap cache pin for direct swapin after PTL unlock */
>  	if (need_clear_cache)
> -		swapcache_clear(si, entry);
> +		swapcache_clear_nr(si, entry, nr_pages);
>  	if (si)
>  		put_swap_device(si);
>  	return ret;
> @@ -4340,7 +4443,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		folio_put(swapcache);
>  	}
>  	if (need_clear_cache)
> -		swapcache_clear(si, entry);
> +		swapcache_clear_nr(si, entry, nr_pages);
>  	if (si)
>  		put_swap_device(si);
>  	return ret;
> @@ -4358,7 +4461,7 @@ static bool pte_range_none(pte_t *pte, int nr_pages)
>  	return true;
>  }
>
> -static struct folio *alloc_anon_folio(struct vm_fault *vmf)
> +static struct folio *alloc_anon_folio(struct vm_fault *vmf, enum behavior behavior)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> @@ -4376,6 +4479,19 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>  	if (unlikely(userfaultfd_armed(vma)))
>  		goto fallback;
>
> +	/*
> +	 * a large folio being swapped-in could be partially in
> +	 * zswap and partially in swap devices, zswap doesn't
> +	 * support large folios yet, we might get corrupted
> +	 * zero-filled data by reading all subpages from swap
> +	 * devices while some of them are actually in zswap
> +	 */
> +	if (behavior == DO_SWAP_PAGE && is_zswap_enabled())
> +		goto fallback;
> +
> +	if (unlikely(behavior != DO_ANON_PAGE && behavior != DO_SWAP_PAGE))
> +		return ERR_PTR(-EINVAL);
> +
>  	/*
>  	 * Get a list of all the (large) orders below PMD_ORDER that are enabled
>  	 * for this vma. Then filter out the orders that can't be allocated over
> @@ -4393,15 +4509,22 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>  		return ERR_PTR(-EAGAIN);
>
>  	/*
> -	 * Find the highest order where the aligned range is completely
> -	 * pte_none(). Note that all remaining orders will be completely
> +	 * For do_anonymous_page, find the highest order where the aligned range is
> +	 * completely pte_none(). Note that all remaining orders will be completely
>  	 * pte_none().
> +	 * For do_swap_page, find the highest order where the aligned range is
> +	 * completely swap entries with contiguous swap offsets.
>  	 */
>  	order = highest_order(orders);
>  	while (orders) {
>  		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> -		if (pte_range_none(pte + pte_index(addr), 1 << order))
> -			break;
> +		if (behavior == DO_ANON_PAGE) {
> +			if (pte_range_none(pte + pte_index(addr), 1 << order))
> +				break;
> +		} else {
> +			if (is_pte_range_contig_swap(pte + pte_index(addr), 1 << order))
> +				break;
> +		}
>  		order = next_order(&orders, order);
>  	}

We have a problem here: alloc_anon_folio() charges the folio itself,

	/* Try allocating the highest of the remaining orders. */
	gfp = vma_thp_gfp_mask(vma);
	while (orders) {
		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
		folio = vma_alloc_folio(gfp, order, vma, addr, true);
		if (folio) {
			if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
				folio_put(folio);
				goto next;
			}
			folio_throttle_swaprate(folio, gfp);
			clear_huge_page(&folio->page, vmf->address, 1 << order);
			return folio;
		}
next:
		order = next_order(&orders, order);
	}

This is necessary for DO_ANON_PAGE, but it is wrong for DO_SWAP_PAGE,
because do_swap_page() charges the folio again via
mem_cgroup_swapin_charge_folio():

	if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm, GFP_KERNEL,
					   entry)) {
		ret = VM_FAULT_OOM;
		goto out_page;
	}

So in the do_swap_page() case the folio is charged twice. I will get this
fixed in v3. The same applies to folio_prealloc() at the end of
alloc_anon_folio().

>
> @@ -4485,7 +4608,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>  	if (unlikely(anon_vma_prepare(vma)))
>  		goto oom;
>  	/* Returns NULL on OOM or ERR_PTR(-EAGAIN) if we must retry the fault */
> -	folio = alloc_anon_folio(vmf);
> +	folio = alloc_anon_folio(vmf, DO_ANON_PAGE);
>  	if (IS_ERR(folio))
>  		return 0;
>  	if (!folio)
> --
> 2.34.1
>

Thanks
Barry
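
[Editor's sketch, not part of the original mail and not the actual v3 change:
one minimal way to avoid the double charge described above, assuming the v2
shape of alloc_anon_folio() quoted in the review, is to make the memcg charge
conditional on the behavior and let do_swap_page() keep charging swapped-in
folios via mem_cgroup_swapin_charge_folio() as it does today.]

	/* Try allocating the highest of the remaining orders. */
	gfp = vma_thp_gfp_mask(vma);
	while (orders) {
		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
		folio = vma_alloc_folio(gfp, order, vma, addr, true);
		if (folio) {
			/*
			 * Charge here only for the anonymous-fault path; the
			 * swap-in path is charged later in do_swap_page() by
			 * mem_cgroup_swapin_charge_folio().
			 */
			if (behavior == DO_ANON_PAGE &&
			    mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
				folio_put(folio);
				goto next;
			}
			folio_throttle_swaprate(folio, gfp);
			/*
			 * Swapped-in contents are filled by swap_read_folio(),
			 * so clearing is only needed for the anonymous fault.
			 */
			if (behavior == DO_ANON_PAGE)
				clear_huge_page(&folio->page, vmf->address, 1 << order);
			return folio;
		}
next:
		order = next_order(&orders, order);
	}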