From: Barry Song <21cnbao@gmail.com>
Date: Fri, 12 Apr 2024 11:30:10 +1200
Subject: Re: [PATCH v2 4/5] mm: swap: entirely map large folios found in swapcache
To: Ryan Roberts
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, baolin.wang@linux.alibaba.com,
 chrisl@kernel.org, david@redhat.com, hanchuanhua@oppo.com, hannes@cmpxchg.org,
 hughd@google.com, kasong@tencent.com, surenb@google.com, v-songbaohua@oppo.com,
 willy@infradead.org, xiang@kernel.org, ying.huang@intel.com, yosryahmed@google.com,
 yuzhao@google.com, ziy@nvidia.com, linux-kernel@vger.kernel.org
In-Reply-To: <1008d688-757a-4c2d-86bd-793f5e787d30@arm.com>
References: <20240409082631.187483-1-21cnbao@gmail.com> <20240409082631.187483-5-21cnbao@gmail.com>
 <1008d688-757a-4c2d-86bd-793f5e787d30@arm.com>
On Fri, Apr 12, 2024 at 3:33 AM Ryan Roberts wrote:
>
> On 09/04/2024 09:26, Barry Song wrote:
> > From: Chuanhua Han
> >
> > When a large folio is found in the swapcache, the current implementation
> > requires calling do_swap_page() nr_pages times, resulting in nr_pages
> > page faults. This patch opts to map the entire large folio at once to
> > minimize page faults. Additionally, redundant checks and early exits
> > for ARM64 MTE restoring are removed.
> >
> > Signed-off-by: Chuanhua Han
> > Co-developed-by: Barry Song
> > Signed-off-by: Barry Song
> > ---
> >  mm/memory.c | 64 +++++++++++++++++++++++++++++++++++++++++++----------
> >  1 file changed, 52 insertions(+), 12 deletions(-)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index c4a52e8d740a..9818dc1893c8 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3947,6 +3947,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >       pte_t pte;
> >       vm_fault_t ret = 0;
> >       void *shadow = NULL;
> > +     int nr_pages = 1;
> > +     unsigned long start_address = vmf->address;
> > +     pte_t *start_pte = vmf->pte;
>
> possible bug?: there are code paths that assign to vmf->pte below in this
> function, so couldn't start_pte be stale in some cases? I'd just do the
> assignment (all 4 of these variables in fact) in an else clause below, after
> any messing about with them is complete.
>
> nit: rename start_pte -> start_ptep ?

Agreed.

>
> > +     bool any_swap_shared = false;
>
> Suggest you defer initialization of this to your "We hit large folios in
> swapcache" block below, and init it to:
>
> any_swap_shared = !pte_swp_exclusive(vmf->pte);
>
> Then the any_shared semantic in swap_pte_batch() can be the same as for
> folio_pte_batch().
>
> >
> >       if (!pte_unmap_same(vmf))
> >               goto out;
> > @@ -4137,6 +4141,35 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >        */
> >       vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
> >                                      &vmf->ptl);
>
> bug: vmf->pte may be NULL and you are not checking it until check_pte:. But
> you are using it in this block. It also seems odd to do all the work in the
> below block under the PTL but before checking if the pte has changed. Suggest
> moving both checks here.

agreed.

>
> > +
> > +     /* We hit large folios in swapcache */
> > +     if (start_pte && folio_test_large(folio) && folio_test_swapcache(folio)) {
>
> What's the start_pte check protecting?

This is exactly protecting the case vmf->pte == NULL, but for some reason it
was assigned at the beginning of the function incorrectly. The intention of
the code was actually to do start_pte = vmf->pte after
"vmf->pte = pte_offset_map_lock".
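
Something like the following is what I have in mind for v3 (a sketch only, to
make the intended ordering concrete; it folds in your start_ptep rename and
the "move both checks here" suggestion, so exact placement may still shift):

	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
				       &vmf->ptl);
	if (unlikely(!vmf->pte ||
		     !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
		goto out_nomap;

	/*
	 * Snapshot the batch defaults only after vmf->pte is known to
	 * be valid; nothing before this point touches them.
	 */
	nr_pages = 1;
	start_address = vmf->address;
	start_ptep = vmf->pte;

This also makes the start_pte check in the large folio branch unnecessary.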
>
> > +             int nr = folio_nr_pages(folio);
> > +             int idx = folio_page_idx(folio, page);
> > +             unsigned long folio_start = vmf->address - idx * PAGE_SIZE;
> > +             unsigned long folio_end = folio_start + nr * PAGE_SIZE;
> > +             pte_t *folio_ptep;
> > +             pte_t folio_pte;
> > +
> > +             if (unlikely(folio_start < max(vmf->address & PMD_MASK, vma->vm_start)))
> > +                     goto check_pte;
> > +             if (unlikely(folio_end > pmd_addr_end(vmf->address, vma->vm_end)))
> > +                     goto check_pte;
> > +
> > +             folio_ptep = vmf->pte - idx;
> > +             folio_pte = ptep_get(folio_ptep);
> > +             if (!is_swap_pte(folio_pte) || non_swap_entry(pte_to_swp_entry(folio_pte)) ||
> > +                 swap_pte_batch(folio_ptep, nr, folio_pte, &any_swap_shared) != nr)
> > +                     goto check_pte;
> > +
> > +             start_address = folio_start;
> > +             start_pte = folio_ptep;
> > +             nr_pages = nr;
> > +             entry = folio->swap;
> > +             page = &folio->page;
> > +     }
> > +
> > +check_pte:
> >       if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
> >               goto out_nomap;
> >
> > @@ -4190,6 +4223,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >                */
> >               exclusive = false;
> >       }
> > +
> > +     /* Reuse the whole large folio iff all entries are exclusive */
> > +     if (nr_pages > 1 && any_swap_shared)
> > +             exclusive = false;
>
> If you init any_shared with the first pte as I suggested then you could just
> set exclusive = !any_shared at the top of this if block without needing this
> separate fixup.

Since your swap_pte_batch() function checks that all PTEs have the same
exclusive bits, I'll be removing any_shared first in version 3 per David's
suggestions. We could potentially develop "any_shared" as an incremental
patchset later on.

> >       }
> >
> >       /*
> > @@ -4204,12 +4241,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >        * We're already holding a reference on the page but haven't mapped it
> >        * yet.
> >        */
> > -     swap_free(entry);
> > +     swap_free_nr(entry, nr_pages);
> >       if (should_try_to_free_swap(folio, vma, vmf->flags))
> >               folio_free_swap(folio);
> >
> > -     inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
> > -     dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
> > +     folio_ref_add(folio, nr_pages - 1);
> > +     add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
> > +     add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
> > +
> >       pte = mk_pte(page, vma->vm_page_prot);
> >
> >       /*
> > @@ -4219,33 +4258,34 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >        * exclusivity.
> >        */
> >       if (!folio_test_ksm(folio) &&
> > -         (exclusive || folio_ref_count(folio) == 1)) {
> > +         (exclusive || (folio_ref_count(folio) == nr_pages &&
> > +                        folio_nr_pages(folio) == nr_pages))) {
> >               if (vmf->flags & FAULT_FLAG_WRITE) {
> >                       pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> >                       vmf->flags &= ~FAULT_FLAG_WRITE;
> >               }
> >               rmap_flags |= RMAP_EXCLUSIVE;
> >       }
> > -     flush_icache_page(vma, page);
> > +     flush_icache_pages(vma, page, nr_pages);
> >       if (pte_swp_soft_dirty(vmf->orig_pte))
> >               pte = pte_mksoft_dirty(pte);
> >       if (pte_swp_uffd_wp(vmf->orig_pte))
> >               pte = pte_mkuffd_wp(pte);
>
> I'm not sure about all this... you are smearing these SW bits from the
> faulting PTE across all the ptes you are mapping. Although I guess actually
> that's ok because swap_pte_batch() only returns a batch with all these bits
> the same?

Initially, I didn't recognize the issue at all because the tested
architecture arm64 didn't include these bits. However, after reviewing your
latest swpout series, which verifies the consistent bits for soft_dirty and
uffd_wp, I now feel it is safe even for platforms with these bits.
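
To spell out the invariant being relied on here (a sketch only -- the helper
name is made up for illustration, and the real check is whatever
swap_pte_batch() ends up doing in your series):

	/*
	 * Hypothetical helper: a batch is only valid if every swap PTE
	 * carries the same soft-dirty and uffd-wp bits as the first
	 * entry, so copying those bits from vmf->orig_pte across the
	 * whole batch cannot change any of them.
	 */
	static bool swap_pte_sw_bits_consistent(pte_t *ptep, int nr, pte_t first)
	{
		int i;

		for (i = 1; i < nr; i++) {
			pte_t pte = ptep_get(ptep + i);

			if (!!pte_swp_soft_dirty(pte) != !!pte_swp_soft_dirty(first) ||
			    !!pte_swp_uffd_wp(pte) != !!pte_swp_uffd_wp(first))
				return false;
		}
		return true;
	}

As long as that holds, applying pte_mksoft_dirty()/pte_mkuffd_wp() based on
vmf->orig_pte alone is correct for the whole range.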
>
> > -     vmf->orig_pte = pte;
>
> Instead of doing a readback below, perhaps:
>
> vmf->orig_pte = pte_advance_pfn(pte, nr_pages);

Nice!

>
> >
> >       /* ksm created a completely new copy */
> >       if (unlikely(folio != swapcache && swapcache)) {
> > -             folio_add_new_anon_rmap(folio, vma, vmf->address);
> > +             folio_add_new_anon_rmap(folio, vma, start_address);
> >               folio_add_lru_vma(folio, vma);
> >       } else {
> > -             folio_add_anon_rmap_pte(folio, page, vma, vmf->address,
> > -                                     rmap_flags);
> > +             folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, start_address,
> > +                                      rmap_flags);
> >       }
> >
> >       VM_BUG_ON(!folio_test_anon(folio) ||
> >                 (pte_write(pte) && !PageAnonExclusive(page)));
> > -     set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
> > -     arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
> > +     set_ptes(vma->vm_mm, start_address, start_pte, pte, nr_pages);
> > +     vmf->orig_pte = ptep_get(vmf->pte);
> > +     arch_do_swap_page(vma->vm_mm, vma, start_address, pte, pte);
> >
> >       folio_unlock(folio);
> >       if (folio != swapcache && swapcache) {
> > @@ -4269,7 +4309,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >       }
> >
> >       /* No need to invalidate - it was non-present before */
> > -     update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
> > +     update_mmu_cache_range(vmf, vma, start_address, start_pte, nr_pages);
> >  unlock:
> >       if (vmf->pte)
> >               pte_unmap_unlock(vmf->pte, vmf->ptl);
>

Thanks
Barry
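
PS: to make the no-readback idea concrete, this is roughly what I would try
(a sketch under assumptions, not the final patch: it assumes
pte_advance_pfn(pte, i) returns pte with its pfn advanced by i pages, and
that the faulting page's offset within the batch -- called idx here, 0 when
no batching happened -- is still in scope at this point):

	set_ptes(vma->vm_mm, start_address, start_pte, pte, nr_pages);
	/*
	 * 'pte' maps the first page of the batch and the faulting
	 * address sits idx pages further in, so orig_pte can be
	 * derived instead of read back with ptep_get().
	 */
	vmf->orig_pte = pte_advance_pfn(pte, idx);
	arch_do_swap_page(vma->vm_mm, vma, start_address, pte, pte);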