From: Ryan Roberts <ryan.roberts@arm.com>
Date: Fri, 12 Apr 2024 12:31:07 +0100
Subject: Re: [PATCH v2 4/5] mm: swap: entirely map large folios found in swapcache
Message-ID: <2f6cc7a4-ea64-40aa-842a-8d85309a5cbd@arm.com>
To: Barry Song <21cnbao@gmail.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, baolin.wang@linux.alibaba.com,
    chrisl@kernel.org, david@redhat.com, hanchuanhua@oppo.com, hannes@cmpxchg.org,
    hughd@google.com, kasong@tencent.com, surenb@google.com, v-songbaohua@oppo.com,
    willy@infradead.org, xiang@kernel.org,
    ying.huang@intel.com, yosryahmed@google.com, yuzhao@google.com, ziy@nvidia.com,
    linux-kernel@vger.kernel.org
References: <20240409082631.187483-1-21cnbao@gmail.com>
 <20240409082631.187483-5-21cnbao@gmail.com>
 <1008d688-757a-4c2d-86bd-793f5e787d30@arm.com>
In-Reply-To:

On 12/04/2024 00:30, Barry Song wrote:
> On Fri, Apr 12, 2024 at 3:33 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> On 09/04/2024 09:26, Barry Song wrote:
>>> From: Chuanhua Han <hanchuanhua@oppo.com>
>>>
>>> When a large folio is found in the swapcache, the current implementation
>>> requires calling do_swap_page() nr_pages times, resulting in nr_pages
>>> page faults. This patch opts to map the entire large folio at once to
>>> minimize page faults. Additionally, redundant checks and early exits
>>> for ARM64 MTE restoring are removed.
>>>
>>> Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com>
>>> Co-developed-by: Barry Song <v-songbaohua@oppo.com>
>>> Signed-off-by: Barry Song <v-songbaohua@oppo.com>
>>> ---
>>>  mm/memory.c | 64 +++++++++++++++++++++++++++++++++++++++++++----------
>>>  1 file changed, 52 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index c4a52e8d740a..9818dc1893c8 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -3947,6 +3947,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>  	pte_t pte;
>>>  	vm_fault_t ret = 0;
>>>  	void *shadow = NULL;
>>> +	int nr_pages = 1;
>>> +	unsigned long start_address = vmf->address;
>>> +	pte_t *start_pte = vmf->pte;
>>
>> possible bug?: there are code paths that assign to vmf->pte below in this
>> function, so couldn't start_pte be stale in some cases? I'd just do the
>> assignment (all 4 of these variables in fact) in an else clause below,
>> after any messing about with them is complete.
>>
>> nit: rename start_pte -> start_ptep ?
>
> Agreed.
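
Something along these lines (untested sketch, using the names from your
patch plus the start_ptep rename; the large-folio batching details are
elided) is what I had in mind - take the snapshot only once vmf->pte has
been assigned and NULL-checked under the PTL:

	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
				       &vmf->ptl);
	if (unlikely(!vmf->pte ||
		     !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
		goto out_nomap;

	/* Defaults for the order-0 case; these can never be stale now. */
	nr_pages = 1;
	start_address = vmf->address;
	start_ptep = vmf->pte;

	if (folio_test_large(folio) && folio_test_swapcache(folio)) {
		/*
		 * Batch detection as in your patch; on success it
		 * overrides nr_pages/start_address/start_ptep, and the
		 * separate start_pte NULL check becomes unnecessary.
		 */
	}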
>
>>
>>> +	bool any_swap_shared = false;
>>
>> Suggest you defer initialization of this to your "We hit large folios in
>> swapcache" block below, and init it to:
>>
>> any_swap_shared = !pte_swp_exclusive(vmf->pte);
>>
>> Then the any_shared semantic in swap_pte_batch() can be the same as for
>> folio_pte_batch().
>>
>>>
>>>  	if (!pte_unmap_same(vmf))
>>>  		goto out;
>>> @@ -4137,6 +4141,35 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>  	 */
>>>  	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
>>>  			&vmf->ptl);
>>
>> bug: vmf->pte may be NULL and you are not checking it until check_pte:.
>> But you are using it in this block. It also seems odd to do all the work
>> in the below block under the PTL but before checking if the pte has
>> changed. Suggest moving both checks here.
>
> agreed.
>
>>
>>> +
>>> +	/* We hit large folios in swapcache */
>>> +	if (start_pte && folio_test_large(folio) && folio_test_swapcache(folio)) {
>>
>> What's the start_pte check protecting?
>
> This is exactly protecting against the case where vmf->pte == NULL, but
> for some reason it was incorrectly assigned at the beginning of the
> function. The intention of the code was actually to do
> start_pte = vmf->pte after "vmf->pte = pte_offset_map_lock()".
>
>>
>>> +		int nr = folio_nr_pages(folio);
>>> +		int idx = folio_page_idx(folio, page);
>>> +		unsigned long folio_start = vmf->address - idx * PAGE_SIZE;
>>> +		unsigned long folio_end = folio_start + nr * PAGE_SIZE;
>>> +		pte_t *folio_ptep;
>>> +		pte_t folio_pte;
>>> +
>>> +		if (unlikely(folio_start < max(vmf->address & PMD_MASK, vma->vm_start)))
>>> +			goto check_pte;
>>> +		if (unlikely(folio_end > pmd_addr_end(vmf->address, vma->vm_end)))
>>> +			goto check_pte;
>>> +
>>> +		folio_ptep = vmf->pte - idx;
>>> +		folio_pte = ptep_get(folio_ptep);
>>> +		if (!is_swap_pte(folio_pte) || non_swap_entry(pte_to_swp_entry(folio_pte)) ||
>>> +		    swap_pte_batch(folio_ptep, nr, folio_pte, &any_swap_shared) != nr)
>>> +			goto check_pte;
>>> +
>>> +		start_address = folio_start;
>>> +		start_pte = folio_ptep;
>>> +		nr_pages = nr;
>>> +		entry = folio->swap;
>>> +		page = &folio->page;
>>> +	}
>>> +
>>> +check_pte:
>>>  	if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
>>>  		goto out_nomap;
>>>
>>> @@ -4190,6 +4223,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>  		 */
>>>  		exclusive = false;
>>>  	}
>>> +
>>> +	/* Reuse the whole large folio iff all entries are exclusive */
>>> +	if (nr_pages > 1 && any_swap_shared)
>>> +		exclusive = false;
>>
>> If you init any_shared with the first pte as I suggested then you could
>> just set exclusive = !any_shared at the top of this if block without
>> needing this separate fixup.
>
> Since your swap_pte_batch() function checks that all PTEs have the same
> exclusive bits, I'll be removing any_shared first in version 3 per David's
> suggestions. We could potentially develop "any_shared" as an incremental
> patchset later on.

Ahh yes, good point. I'll admit that your conversation about this went over
my head at the time since I hadn't yet looked at this.

>
>>>  	}
>>>
>>>  	/*
>>> @@ -4204,12 +4241,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>  	 * We're already holding a reference on the page but haven't mapped it
>>>  	 * yet.
>>>  	 */
>>> -	swap_free(entry);
>>> +	swap_free_nr(entry, nr_pages);
>>>  	if (should_try_to_free_swap(folio, vma, vmf->flags))
>>>  		folio_free_swap(folio);
>>>
>>> -	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
>>> -	dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
>>> +	folio_ref_add(folio, nr_pages - 1);
>>> +	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
>>> +	add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
>>> +
>>>  	pte = mk_pte(page, vma->vm_page_prot);
>>>
>>>  	/*
>>> @@ -4219,33 +4258,34 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>  	 * exclusivity.
>>>  	 */
>>>  	if (!folio_test_ksm(folio) &&
>>> -	    (exclusive || folio_ref_count(folio) == 1)) {
>>> +	    (exclusive || (folio_ref_count(folio) == nr_pages &&
>>> +			   folio_nr_pages(folio) == nr_pages))) {
>>>  		if (vmf->flags & FAULT_FLAG_WRITE) {
>>>  			pte = maybe_mkwrite(pte_mkdirty(pte), vma);
>>>  			vmf->flags &= ~FAULT_FLAG_WRITE;
>>>  		}
>>>  		rmap_flags |= RMAP_EXCLUSIVE;
>>>  	}
>>> -	flush_icache_page(vma, page);
>>> +	flush_icache_pages(vma, page, nr_pages);
>>>  	if (pte_swp_soft_dirty(vmf->orig_pte))
>>>  		pte = pte_mksoft_dirty(pte);
>>>  	if (pte_swp_uffd_wp(vmf->orig_pte))
>>>  		pte = pte_mkuffd_wp(pte);
>>
>> I'm not sure about all this... you are smearing these SW bits from the
>> faulting PTE across all the ptes you are mapping. Although I guess
>> actually that's ok because swap_pte_batch() only returns a batch with all
>> these bits the same?
>
> Initially, I didn't recognize the issue at all because the tested
> architecture, arm64, doesn't include these bits. However, after reviewing
> your latest swpout series, which verifies that the soft_dirty and uffd_wp
> bits are consistent across the batch, I now believe it is safe even on
> platforms that have these bits.

Yep, agreed.

>
>>
>>> -	vmf->orig_pte = pte;
>>
>> Instead of doing a readback below, perhaps:
>>
>> vmf->orig_pte = pte_advance_pfn(pte, nr_pages);
>
> Nice!
>
>>
>>>
>>>  	/* ksm created a completely new copy */
>>>  	if (unlikely(folio != swapcache && swapcache)) {
>>> -		folio_add_new_anon_rmap(folio, vma, vmf->address);
>>> +		folio_add_new_anon_rmap(folio, vma, start_address);
>>>  		folio_add_lru_vma(folio, vma);
>>>  	} else {
>>> -		folio_add_anon_rmap_pte(folio, page, vma, vmf->address,
>>> -					rmap_flags);
>>> +		folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, start_address,
>>> +					 rmap_flags);
>>>  	}
>>>
>>>  	VM_BUG_ON(!folio_test_anon(folio) ||
>>>  		  (pte_write(pte) && !PageAnonExclusive(page)));
>>> -	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
>>> -	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
>>> +	set_ptes(vma->vm_mm, start_address, start_pte, pte, nr_pages);
>>> +	vmf->orig_pte = ptep_get(vmf->pte);
>>> +	arch_do_swap_page(vma->vm_mm, vma, start_address, pte, pte);
>>>
>>>  	folio_unlock(folio);
>>>  	if (folio != swapcache && swapcache) {
>>> @@ -4269,7 +4309,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>  	}
>>>
>>>  	/* No need to invalidate - it was non-present before */
>>> -	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>>> +	update_mmu_cache_range(vmf, vma, start_address, start_pte, nr_pages);
>>>  unlock:
>>>  	if (vmf->pte)
>>>  		pte_unmap_unlock(vmf->pte, vmf->ptl);
>>
>
> Thanks
> Barry
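
P.S. for the archive: the reason the bit "smearing" is safe is that
swap_pte_batch() computes each expected pte from the previous one with a
helper that carries all the swp pte software bits forward, so a returned
batch can never mix soft-dirty/uffd-wp/exclusive state. Roughly this shape
(quoting my swpout series from memory, so treat it as a sketch rather than
the exact code):

	static inline pte_t pte_next_swp_offset(pte_t pte)
	{
		swp_entry_t entry = pte_to_swp_entry(pte);
		pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
							   swp_offset(entry) + 1));

		/*
		 * Preserve every software bit; if any of them differs in
		 * the next pte, ptep_get() won't match the expected value
		 * and the batch ends there.
		 */
		if (pte_swp_soft_dirty(pte))
			new = pte_swp_mksoft_dirty(new);
		if (pte_swp_exclusive(pte))
			new = pte_swp_mkexclusive(new);
		if (pte_swp_uffd_wp(pte))
			new = pte_swp_mkuffd_wp(new);

		return new;
	}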