From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A38AAC4345F for ; Fri, 3 May 2024 23:23:50 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 27B746B0083; Fri, 3 May 2024 19:23:50 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 22B136B0085; Fri, 3 May 2024 19:23:50 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0F3A76B0087; Fri, 3 May 2024 19:23:50 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id E3C716B0083 for ; Fri, 3 May 2024 19:23:49 -0400 (EDT) Received: from smtpin15.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 548FC121270 for ; Fri, 3 May 2024 23:23:49 +0000 (UTC) X-FDA: 82078664178.15.7F8AAC3 Received: from mail-ua1-f53.google.com (mail-ua1-f53.google.com [209.85.222.53]) by imf01.hostedemail.com (Postfix) with ESMTP id 893AE40003 for ; Fri, 3 May 2024 23:23:47 +0000 (UTC) Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=gmail.com header.s=20230601 header.b=Slva+PN1; dmarc=pass (policy=none) header.from=gmail.com; spf=pass (imf01.hostedemail.com: domain of 21cnbao@gmail.com designates 209.85.222.53 as permitted sender) smtp.mailfrom=21cnbao@gmail.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1714778627; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=j6H3VTScbmcVWBWXc10KvGJjTYbdsx8It8h7yZ/13J0=; b=D8To9fXZdyiF+FTXjXAiXBYX4nBw6WcNklxRo2L182tYt3bx3KnBYFKCaksAmkQDjozOAo ikHbiRFM0zgo3e9tom/ifHeGwC8q8O+9xPUCiHipPBOsb46fyYnlWAF6k+0kAaeV8ahdpw lQl9UKshehBKFdYEppN/2RGUNs96i+E= ARC-Authentication-Results: i=1; imf01.hostedemail.com; dkim=pass header.d=gmail.com header.s=20230601 header.b=Slva+PN1; dmarc=pass (policy=none) header.from=gmail.com; spf=pass (imf01.hostedemail.com: domain of 21cnbao@gmail.com designates 209.85.222.53 as permitted sender) smtp.mailfrom=21cnbao@gmail.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1714778627; a=rsa-sha256; cv=none; b=k7j67ouUwiB+MwWuIOu75LSsWjlBYo2dCiPBmrG6i2yewPtfYyoL+rvcrZvQhwiO8d1XcH cQFSanr5F1zqSQii7ed3IVDBgsy0cXlhIu9x82ev27RfNiD6PYvnhGGEkwKEt3epqbQMca QjCmnhS8VVPVENhxN+7jOUE5GatvA6Y= Received: by mail-ua1-f53.google.com with SMTP id a1e0cc1a2514c-7f3ff632a51so1133932241.1 for ; Fri, 03 May 2024 16:23:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1714778626; x=1715383426; darn=kvack.org; h=content-transfer-encoding:cc:to:subject:message-id:date:from :in-reply-to:references:mime-version:from:to:cc:subject:date :message-id:reply-to; bh=j6H3VTScbmcVWBWXc10KvGJjTYbdsx8It8h7yZ/13J0=; b=Slva+PN1o3c209RAzWaLOsquuOH/1x2wgpiyq8tyryHXfN56kPdaGY9LTfFPT9fKXB pcPy/0MI9vbNogjLKrJvhqRosHG0rN9WyUXJWB7ZH1xMJ5pUq9N2BfvN/Fx5sJr3V7fF FSYmt0xpRfIZnH5mWFHTivzaGeKdRnUfm3j+nWiQCCTFXSqs3bdWIflJp0qLKLaYXLb8 wwqNvQwOybYCT4PQmJ8J2Cp++zPnyHHDuCcjkDFXCFM/IrRSia9V5FLvS8lnZKp6P6yd I0/dU5HvDnDQ9muFKPyKxsM50ZEMfsB6Er5RL2r7xdl8Dj0re9/4W1jS+JwXskUj22Pt +tjA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1714778626; x=1715383426; h=content-transfer-encoding:cc:to:subject:message-id:date:from :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=j6H3VTScbmcVWBWXc10KvGJjTYbdsx8It8h7yZ/13J0=; b=ei6xLBviAGsnu4wPgU7UEiooZFN0DbNhG6KGiNYigqIVOnF9jO3dKQOVjbLrkbh+bI ykdRVDoAgfMrXZpJdO3h1c1zHYUUzydBj8LM0Xx0gM1RMlvnDCuMYoE38D5ETYnBuafV NyNb5iZJdRBJzZgNXCSJrheDMLTEsJEIEBwIv31cUCHBdbbF/TeFx6Q1D6umB7zj3MsG 5HR7t7JpAM12GElM6beq74B7rbpKFVwuqRB5Rgrp10IgjMLTnFlendq6skE2UgJAH76n qy3Ih1qBk+3XvfwcHlYmy5sJ0PMfLwU/qVzqkr5WOa7+jPFG1edi+3WcdUpN3bUk+lD0 4BIg== X-Forwarded-Encrypted: i=1; AJvYcCU3IPxtO121yL+Tth+YdUtBR/qDTOj6Yw1tMbwkw9zinXkkZZpWSlSAmd2P+TpKQrNQrtuT8dYeL7EIq+sB21pUipg= X-Gm-Message-State: AOJu0YyyRZcx1PIBRRnsPE/c6EfM2f/wPFR0H+hVb1MtqGP6Wut8uuso 5xF76jSxd8A++DMF9r3xaANm00prJ/vpwQ50U2i9WKvsCAswZTjMAntAJq/g0eB3P1Q8lvVRKEA 0ZpB+4oNB4yzmxXzpTvuRuzjArfM= X-Google-Smtp-Source: AGHT+IG+AXjs1RQlhTgaI/RXMSpA48vwAUgsjbX2UUARv0h3jAYddD1sr3GQUldKCmr4+7Wo8TjXACVTAx4PS0N1epw= X-Received: by 2002:a05:6122:1da1:b0:4d8:7d49:18fe with SMTP id gg33-20020a0561221da100b004d87d4918femr10218903vkb.4.1714778626450; Fri, 03 May 2024 16:23:46 -0700 (PDT) MIME-Version: 1.0 References: <20240503005023.174597-1-21cnbao@gmail.com> <20240503005023.174597-7-21cnbao@gmail.com> <0b4d4d4b-91d8-4fd5-af4e-aebe9ee08b89@arm.com> In-Reply-To: <0b4d4d4b-91d8-4fd5-af4e-aebe9ee08b89@arm.com> From: Barry Song <21cnbao@gmail.com> Date: Sat, 4 May 2024 07:23:35 +0800 Message-ID: Subject: Re: [PATCH v3 6/6] mm: swap: entirely map large folios found in swapcache To: Ryan Roberts Cc: akpm@linux-foundation.org, linux-mm@kvack.org, baolin.wang@linux.alibaba.com, chrisl@kernel.org, david@redhat.com, hanchuanhua@oppo.com, hannes@cmpxchg.org, hughd@google.com, kasong@tencent.com, linux-kernel@vger.kernel.org, surenb@google.com, v-songbaohua@oppo.com, willy@infradead.org, xiang@kernel.org, ying.huang@intel.com, yosryahmed@google.com, yuzhao@google.com, ziy@nvidia.com Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Rspam-User: X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: 893AE40003 X-Stat-Signature: bw9u13yxoheap8jwgquzgdwt6ps5hkam X-HE-Tag: 1714778627-860226 X-HE-Meta: U2FsdGVkX19gNnaDYMkKdb73q1/spU8drgv4o7xQ1lS+lcxEG6R392N973TXDC+qtPE1AVv0asEXA0VA+rHshX/EQ99RSvblWIE65zBL8S5jlp/WDBEgqHatBrb7Dl1dGZrme5WUx00dz4Prk9Z3ePRrOMFMA8fz/6DNamX3rpb4AahbJ3s/7s8sZKgPYANdQuWMVgTXQaqM/5Yx9SxIVTrt+8B83pMytAVBUsk+rIawTt60L0+joqwI/mtqA/NpzlkIWmYKDaTNyj03CLKF+um/VcoPyxRDtucocq4yivoFVbTqNZ8RzMZtNoUEd2eI0BcGf92TcVBiLkSETTlwCBj7FjHttQVrCVO0Bkaa8pKhhK5uLv0bbm45Unt0avqyCfD1ktEeczS8pvdRKd6ScAbFYMryO0mwg+qgvfOUNSjdA4E5504KJCtD6bL6rMZT5IxvQoRDCAVSR1vWeDWzXUMmyajVSVthehkHDD+E1kA44jNIH70MiTiRSeVITiraK0xhX9Xwmh7Qfgq/aOXX5kKNbWxf5ZK3v1Kedb1WVgiChjR4od0276MkspaEtJTi7eVGKP52BECd0mOb1vkxiwmjDNP/tuTGWN018cV4zyHUXQ5xQv03TkZy28ECqJnTIbLrO3ZLFBIqe6uirY/8VIgQ0VvEziEDeHqnGSpIdEqanLXL1QrDUxEo300fFOXn393GNUiIE+u0SaLcZKUxAz8PVVrVAqfrp7xcfzHFVZpCQrOztoTcJdlLWu2pQq6MIQoNhUpips41GXA6TZWQUlb1VzdK6rFw7mygAOYWuozN6d4mqV+NRTilzjsf2PmZviIEytg2cqyhooy6XM66ehgKiHK33YElHyU7gPlDYHaEX3QAsD/xgu0CywCE0BQXp21RczBcTzGHRoO5w4LRD2MwrRAQPqa1F8QW5EZBm0thwverrniH98/PyI9oJjWXSSGLEa2+G3ia0XgNDgE cZzaubbI PlVkmR1wbJypxztowAgmIYQEKA+8RWHgXPicx8zyYvRJBatkKhRiz9omM3MRzlRwxJpPkcFrtiPFHAt4i0HmoMvNJJYO6Cnej4dckXTwm8YjKctn562+/kw2tTiVDuIMemrLkw0tiPhceHaaSwcl01T10glQly8UNGSxXrONrLx0ELp9vVM2Yg96C/ByKqKJ0hazmutQnmm3O4VMHAiSFmu1J4TAyJCTN+Au0OJ85vW2irZez22vYNIplxTEccj1NLaGwKYf6UmCzxFa3kwyQDOLGiJ2/SwrPBLvCf5oVGzF8n0gXy27xFVV+7bLHs065YwymGYKjcs0ZnqEmryHm2bECluWvRe8bJOUzaYyvFl6+oN7uIrNx6bhlELnXNtHyicIPB5N4ZViilQB1BeymLKzMqXG2aHrdvS0IjVQp5rMzFS1QKAKXoEK4VxQQH7GetQwDMNvQm6OzXbMm58XzaiMkhA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On Fri, May 3, 2024 at 6:50=E2=80=AFPM Ryan Roberts = wrote: > > On 03/05/2024 01:50, Barry Song wrote: > > From: Chuanhua Han > > > > When a large folio is found in the swapcache, the current implementatio= n > > requires calling do_swap_page() nr_pages times, resulting in nr_pages > > page faults. This patch opts to map the entire large folio at once to > > minimize page faults. Additionally, redundant checks and early exits > > for ARM64 MTE restoring are removed. > > > > Signed-off-by: Chuanhua Han > > Co-developed-by: Barry Song > > Signed-off-by: Barry Song > > With the suggested changes below: > > Reviewed-by: Ryan Roberts > > > --- > > mm/memory.c | 60 ++++++++++++++++++++++++++++++++++++++++++----------- > > 1 file changed, 48 insertions(+), 12 deletions(-) > > > > diff --git a/mm/memory.c b/mm/memory.c > > index 22e7c33cc747..940fdbe69fa1 100644 > > --- a/mm/memory.c > > +++ b/mm/memory.c > > @@ -3968,6 +3968,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > pte_t pte; > > vm_fault_t ret =3D 0; > > void *shadow =3D NULL; > > + int nr_pages =3D 1; > > + unsigned long page_idx =3D 0; > > + unsigned long address =3D vmf->address; > > + pte_t *ptep; > > nit: Personally I'd prefer all these to get initialised just before the "= if > (folio_test_large()..." block below. That way it is clear they are fresh = (incase > any logic between here and there makes an adjustment) and its clear that = they > are only to be used after that block (the compiler will warn if using an > uninitialized value). right. I agree this will make the code more readable. > > > > > if (!pte_unmap_same(vmf)) > > goto out; > > @@ -4166,6 +4170,36 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > goto out_nomap; > > } > > > > + ptep =3D vmf->pte; > > + if (folio_test_large(folio) && folio_test_swapcache(folio)) { > > + int nr =3D folio_nr_pages(folio); > > + unsigned long idx =3D folio_page_idx(folio, page); > > + unsigned long folio_start =3D vmf->address - idx * PAGE_S= IZE; > > + unsigned long folio_end =3D folio_start + nr * PAGE_SIZE; > > + pte_t *folio_ptep; > > + pte_t folio_pte; > > + > > + if (unlikely(folio_start < max(vmf->address & PMD_MASK, v= ma->vm_start))) > > + goto check_folio; > > + if (unlikely(folio_end > pmd_addr_end(vmf->address, vma->= vm_end))) > > + goto check_folio; > > + > > + folio_ptep =3D vmf->pte - idx; > > + folio_pte =3D ptep_get(folio_ptep); > > + if (!pte_same(folio_pte, pte_move_swp_offset(vmf->orig_pt= e, -idx)) || > > + swap_pte_batch(folio_ptep, nr, folio_pte) !=3D nr) > > + goto check_folio; > > + > > + page_idx =3D idx; > > + address =3D folio_start; > > + ptep =3D folio_ptep; > > + nr_pages =3D nr; > > + entry =3D folio->swap; > > + page =3D &folio->page; > > + } > > + > > +check_folio: > > Is this still the correct label name, given the checks are now above the = new > block? Perhaps "one_page" or something like that? not quite sure about this, as the code after one_page can be multiple_pages= . On the other hand, it seems we are really checking folio after "check_folio= " :-) BUG_ON(!folio_test_anon(folio) && folio_test_mappedtodisk(folio)); BUG_ON(folio_test_anon(folio) && PageAnonExclusive(page)); /* * Check under PT lock (to protect against concurrent fork() sharing * the swap entry concurrently) for certainly exclusive pages. */ if (!folio_test_ksm(folio)) { > > > + > > /* > > * PG_anon_exclusive reuses PG_mappedtodisk for anon pages. A swa= p pte > > * must never point at an anonymous page in the swapcache that is > > @@ -4225,12 +4259,13 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > * We're already holding a reference on the page but haven't mapp= ed it > > * yet. > > */ > > - swap_free_nr(entry, 1); > > + swap_free_nr(entry, nr_pages); > > if (should_try_to_free_swap(folio, vma, vmf->flags)) > > folio_free_swap(folio); > > > > - inc_mm_counter(vma->vm_mm, MM_ANONPAGES); > > - dec_mm_counter(vma->vm_mm, MM_SWAPENTS); > > + folio_ref_add(folio, nr_pages - 1); > > + add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages); > > + add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages); > > pte =3D mk_pte(page, vma->vm_page_prot); > > > > /* > > @@ -4240,34 +4275,35 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > * exclusivity. > > */ > > if (!folio_test_ksm(folio) && > > - (exclusive || folio_ref_count(folio) =3D=3D 1)) { > > + (exclusive || (folio_ref_count(folio) =3D=3D nr_pages && > > + folio_nr_pages(folio) =3D=3D nr_pages))) { > > I think in practice there is no change here? If nr_pages > 1 then the fol= io is > in the swapcache, so there is an extra ref on it? I agree with the change= for > robustness sake. Just checking my understanding. This is the code showing we are reusing/(mkwrite) a folio either 1. we meet a small folio and we are the only one hitting the small folio 2. we meet a large folio and we are the only one hitting the large folio any corner cases besides the above two seems difficult. for example, while we hit a large folio in swapcache but if we can't entirely map it (nr_pages=3D=3D1) due to partial unmap, we will have folio_ref_count(folio) =3D=3D nr_pages =3D=3D 1, in this case, lacking folio_nr_pages(folio) =3D= =3D nr_pages might lead to mkwrite() on a single pte within a partially unmapped large folio. not quite sure this is wrong, but seems buggy and arduous. > > > if (vmf->flags & FAULT_FLAG_WRITE) { > > pte =3D maybe_mkwrite(pte_mkdirty(pte), vma); > > vmf->flags &=3D ~FAULT_FLAG_WRITE; > > } > > rmap_flags |=3D RMAP_EXCLUSIVE; > > } > > - flush_icache_page(vma, page); > > + flush_icache_pages(vma, page, nr_pages); > > if (pte_swp_soft_dirty(vmf->orig_pte)) > > pte =3D pte_mksoft_dirty(pte); > > if (pte_swp_uffd_wp(vmf->orig_pte)) > > pte =3D pte_mkuffd_wp(pte); > > - vmf->orig_pte =3D pte; > > + vmf->orig_pte =3D pte_advance_pfn(pte, page_idx); > > > > /* ksm created a completely new copy */ > > if (unlikely(folio !=3D swapcache && swapcache)) { > > - folio_add_new_anon_rmap(folio, vma, vmf->address); > > + folio_add_new_anon_rmap(folio, vma, address); > > folio_add_lru_vma(folio, vma); > > } else { > > - folio_add_anon_rmap_pte(folio, page, vma, vmf->address, > > + folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, addr= ess, > > rmap_flags); > > } > > > > VM_BUG_ON(!folio_test_anon(folio) || > > (pte_write(pte) && !PageAnonExclusive(page))); > > - set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte); > > - arch_do_swap_page_nr(vma->vm_mm, vma, vmf->address, > > - pte, vmf->orig_pte, 1); > > + set_ptes(vma->vm_mm, address, ptep, pte, nr_pages); > > + arch_do_swap_page_nr(vma->vm_mm, vma, address, > > + pte, pte, nr_pages); > > > > folio_unlock(folio); > > if (folio !=3D swapcache && swapcache) { > > @@ -4291,7 +4327,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > } > > > > /* No need to invalidate - it was non-present before */ > > - update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1); > > + update_mmu_cache_range(vmf, vma, address, ptep, nr_pages); > > unlock: > > if (vmf->pte) > > pte_unmap_unlock(vmf->pte, vmf->ptl); > Thanks Barry