d="scan'208";a="14954298" Received: from orviesa008.jf.intel.com ([10.64.159.148]) by fmvoesa106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 May 2024 00:46:23 -0700 X-CSE-ConnectionGUID: UkjHkjEBRiueIdo9SYg/Dw== X-CSE-MsgGUID: ZwqJDB7NROywee5DOBY9xg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,147,1712646000"; d="scan'208";a="29720793" Received: from unknown (HELO yhuang6-desk2.ccr.corp.intel.com) ([10.238.208.55]) by orviesa008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 May 2024 00:46:18 -0700 From: "Huang, Ying" To: Barry Song <21cnbao@gmail.com> Cc: akpm@linux-foundation.org, linux-mm@kvack.org, baolin.wang@linux.alibaba.com, chrisl@kernel.org, david@redhat.com, hanchuanhua@oppo.com, hannes@cmpxchg.org, hughd@google.com, kasong@tencent.com, linux-kernel@vger.kernel.org, ryan.roberts@arm.com, surenb@google.com, v-songbaohua@oppo.com, willy@infradead.org, xiang@kernel.org, yosryahmed@google.com, yuzhao@google.com, ziy@nvidia.com Subject: Re: [PATCH v4 6/6] mm: swap: entirely map large folios found in swapcache In-Reply-To: <20240508224040.190469-7-21cnbao@gmail.com> (Barry Song's message of "Thu, 9 May 2024 10:40:40 +1200") References: <20240508224040.190469-1-21cnbao@gmail.com> <20240508224040.190469-7-21cnbao@gmail.com> Date: Thu, 09 May 2024 15:44:25 +0800 Message-ID: <875xvnig4m.fsf@yhuang6-desk2.ccr.corp.intel.com> User-Agent: Gnus/5.13 (Gnus v5.13) MIME-Version: 1.0 Content-Type: text/plain; charset=ascii X-Stat-Signature: wkc8bdukqmyjzfo834to89xc6exd8odq X-Rspamd-Queue-Id: 1C042A000E X-Rspamd-Server: rspam10 X-Rspam-User: X-HE-Tag: 1715240783-654934 X-HE-Meta: U2FsdGVkX19czV2qhCsqCxVQfGcTYk0e7IcICnTHWVJ+dpfWWqA9hvyCtc1vTsS2SnuqoBmlwWEzJLqhLNirTvJRGpe88ywWJI2MjBp9sRfOn1h8I8M8406qliTDoDC0PD7IFCO7awcZIo7seDs/xQGX61KKqw9h0948wdv6eK3xq0w3I26YFRL04vvV2q3tYdlNxL9l6K9DvMFHr8XdGDjd9C/XHhKvAUN9JtogQswZSHGPKcQfKpjQs5H2c5yL9WlK8eSXKw3EzaoCnXA1PuuWtP0qguE4OpKnk3j99rHf0STYBWQjKJzJ66CODdG1X3V0OnwVphmUkLXOZiIKx29QfrOKlFgMTWdqhwnBgQUuSZOIBCt1+HAYveKM5vJ0A6GOy3eU1xepuuUp7GhWZ6lhAF2JIfJ8jiTKghMFewsiNEsU8YqpC9Or5MIaHJ5D7tMcvto/kUC95gZzTztZC8q6A/Au51rKcQpNvipgTkty53atWDQsRIqZKFXNG/6hLteDVm/qKZPkfDOiCgo3KZXz2lMrUZ+2xUwFQIm+7Dx0DFy9/EVSn6PGBftjB8UXWoBIaplEqi6qjaKTSsFilMhW7OpcApmh4oY/hmYv/E9QKYGEs1iQwu4rU1c0jk5G4zIKBLSIkh9gGpQRLgbi80wATwuyUKYWZrMcKxGjg3hmfhen/BN3s4c0loxdbHipQnfwD1QTmplgQsDodPCMuHYjjgCFY1NdbxpDPkCxlN6QEGr3ukD+zlYxaz+jPor86twuFRoZFo4VzwtCe6rFHb53bUat1vNQZMX57na/Ow+VMC6iouCBUaV4t0iXH/HlJ81j53ykunzQ0sgUibnKtsZHercaLsW+nwJpz4Tj2NtFo7FpNqGaEGXz8xYBVqwtYameQVKoyAj+aO3VwpcZYQQi2pyqCgiMoS13JxnLEvME1SPVK9KoeEKnLSQBgt+vZp3YHY2WxlJujO4uPmy ZwW6wneB aalrrbRQ7pQYBWW/d7V9XiSB0jCzK8ZRW1A4E7DhdBCRlXg4+G13DHlIQI0bDt9B1pFI0zciqcmL9kSfE/B/+phA1fR8H+kdKs/vWOiJnUe/8/0JGP8PFw9x8mgaTqo6lGos44adVZp0o3KK3f7tRTwjxCj4FsL5Z9WDkxrCX7me0/5a7WC739l+VT3uW5IJII2xWCfANi22P6fh25Zachn/+DVFTxM4ZzhMALC7xPF7NxTyMkc/8E9bKzeh07gDObXcVNJAVFGeGgvSE0lId4zR/NHTQGq3jKWJSCA57gctiNU/a8te5pWs6AxNSkaXzNsA0aJLuynMjd5nHAQW7W4iKdQ== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Barry Song <21cnbao@gmail.com> writes: > From: Chuanhua Han > > When a large folio is found in the swapcache, the current implementation > requires calling do_swap_page() nr_pages times, resulting in nr_pages > page faults. This patch opts to map the entire large folio at once to > minimize page faults. 
> Additionally, redundant checks and early exits
> for ARM64 MTE restoring are removed.
>
> Signed-off-by: Chuanhua Han
> Co-developed-by: Barry Song
> Signed-off-by: Barry Song
> Reviewed-by: Ryan Roberts

LGTM, Thanks!  Feel free to add

Reviewed-by: "Huang, Ying"

in the future version.

> ---
>  mm/memory.c | 59 +++++++++++++++++++++++++++++++++++++++++++----------
>  1 file changed, 48 insertions(+), 11 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index d9434df24d62..8b9e4cab93ed 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3968,6 +3968,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	pte_t pte;
>  	vm_fault_t ret = 0;
>  	void *shadow = NULL;
> +	int nr_pages;
> +	unsigned long page_idx;
> +	unsigned long address;
> +	pte_t *ptep;
>
>  	if (!pte_unmap_same(vmf))
>  		goto out;
> @@ -4166,6 +4170,38 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  			goto out_nomap;
>  	}
>
> +	nr_pages = 1;
> +	page_idx = 0;
> +	address = vmf->address;
> +	ptep = vmf->pte;
> +	if (folio_test_large(folio) && folio_test_swapcache(folio)) {
> +		int nr = folio_nr_pages(folio);
> +		unsigned long idx = folio_page_idx(folio, page);
> +		unsigned long folio_start = address - idx * PAGE_SIZE;
> +		unsigned long folio_end = folio_start + nr * PAGE_SIZE;
> +		pte_t *folio_ptep;
> +		pte_t folio_pte;
> +
> +		if (unlikely(folio_start < max(address & PMD_MASK, vma->vm_start)))
> +			goto check_folio;
> +		if (unlikely(folio_end > pmd_addr_end(address, vma->vm_end)))
> +			goto check_folio;
> +
> +		folio_ptep = vmf->pte - idx;
> +		folio_pte = ptep_get(folio_ptep);
> +		if (!pte_same(folio_pte, pte_move_swp_offset(vmf->orig_pte, -idx)) ||
> +		    swap_pte_batch(folio_ptep, nr, folio_pte) != nr)
> +			goto check_folio;
> +
> +		page_idx = idx;
> +		address = folio_start;
> +		ptep = folio_ptep;
> +		nr_pages = nr;
> +		entry = folio->swap;
> +		page = &folio->page;
> +	}
> +
> +check_folio:
>  	/*
>  	 * PG_anon_exclusive reuses PG_mappedtodisk for anon pages. A swap pte
>  	 * must never point at an anonymous page in the swapcache that is
> @@ -4225,12 +4261,12 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	 * We're already holding a reference on the page but haven't mapped it
>  	 * yet.
>  	 */
> -	swap_free(entry);
> +	swap_free_nr(entry, nr_pages);
>  	if (should_try_to_free_swap(folio, vma, vmf->flags))
>  		folio_free_swap(folio);
>
> -	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
> -	dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
> +	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
> +	add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
>  	pte = mk_pte(page, vma->vm_page_prot);
>
>  	/*
> @@ -4247,27 +4283,28 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		}
>  		rmap_flags |= RMAP_EXCLUSIVE;
>  	}
> -	flush_icache_page(vma, page);
> +	folio_ref_add(folio, nr_pages - 1);
> +	flush_icache_pages(vma, page, nr_pages);
>  	if (pte_swp_soft_dirty(vmf->orig_pte))
>  		pte = pte_mksoft_dirty(pte);
>  	if (pte_swp_uffd_wp(vmf->orig_pte))
>  		pte = pte_mkuffd_wp(pte);
> -	vmf->orig_pte = pte;
> +	vmf->orig_pte = pte_advance_pfn(pte, page_idx);
>
>  	/* ksm created a completely new copy */
>  	if (unlikely(folio != swapcache && swapcache)) {
> -		folio_add_new_anon_rmap(folio, vma, vmf->address);
> +		folio_add_new_anon_rmap(folio, vma, address);
>  		folio_add_lru_vma(folio, vma);
>  	} else {
> -		folio_add_anon_rmap_pte(folio, page, vma, vmf->address,
> +		folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, address,
>  					rmap_flags);
>  	}
>
>  	VM_BUG_ON(!folio_test_anon(folio) ||
>  			(pte_write(pte) && !PageAnonExclusive(page)));
> -	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
> -	arch_do_swap_page_nr(vma->vm_mm, vma, vmf->address,
> -			pte, vmf->orig_pte, 1);
> +	set_ptes(vma->vm_mm, address, ptep, pte, nr_pages);
> +	arch_do_swap_page_nr(vma->vm_mm, vma, address,
> +			pte, pte, nr_pages);
>
>  	folio_unlock(folio);
>  	if (folio != swapcache && swapcache) {
> @@ -4291,7 +4328,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	}
>
>  	/* No need to invalidate - it was non-present before */
> -	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
> +	update_mmu_cache_range(vmf, vma, address, ptep, nr_pages);
> unlock:
>  	if (vmf->pte)
>  		pte_unmap_unlock(vmf->pte, vmf->ptl);

--
Best Regards,
Huang, Ying
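For anyone who wants the shape of the new batching decision in isolation, below is a minimal, self-contained C sketch of the address-range test the patch adds to do_swap_page(). It is illustrative only: the PAGE_SIZE/PMD_SIZE values, swapin_batch_size(), and pmd_span_end() are simplified stand-ins rather than the kernel's helpers, and the PTE-contents checks (pte_same()/swap_pte_batch()) are omitted.

#include <stdio.h>

#define PAGE_SIZE  4096UL
#define PMD_SIZE   (2UL * 1024 * 1024)   /* 2 MiB, x86-64 style */
#define PMD_MASK   (~(PMD_SIZE - 1))

/* Hypothetical stand-in for pmd_addr_end(): end of the PMD range
 * containing addr, clamped to the VMA end. */
static unsigned long pmd_span_end(unsigned long addr, unsigned long vma_end)
{
	unsigned long end = (addr & PMD_MASK) + PMD_SIZE;

	return end < vma_end ? end : vma_end;
}

/*
 * Decide how many pages of an nr-page folio can be mapped in one
 * fault at 'address', where the faulting page is page 'idx' of the
 * folio.  Returns nr if the whole folio lies inside both the VMA and
 * a single PMD-covered range; otherwise 1, i.e. fall back to mapping
 * only the faulting page.
 */
static unsigned long swapin_batch_size(unsigned long address, unsigned long idx,
				       unsigned long nr, unsigned long vma_start,
				       unsigned long vma_end)
{
	unsigned long folio_start = address - idx * PAGE_SIZE;
	unsigned long folio_end = folio_start + nr * PAGE_SIZE;
	unsigned long lo = address & PMD_MASK;

	if (lo < vma_start)
		lo = vma_start;            /* clamp to VMA start, as max() does */
	if (folio_start < lo)
		return 1;                  /* folio starts before VMA/PMD range */
	if (folio_end > pmd_span_end(address, vma_end))
		return 1;                  /* folio ends past VMA/PMD range */
	return nr;
}

int main(void)
{
	/* A 16-page (64 KiB) folio faulted at its third page, well inside
	 * a large VMA: the whole folio fits in one PMD range. */
	unsigned long vma_start = 0x400000, vma_end = 0x800000;
	unsigned long addr = 0x500000 + 2 * PAGE_SIZE;

	printf("batch = %lu\n",
	       swapin_batch_size(addr, 2, 16, vma_start, vma_end));
	return 0;
}

Compiled and run, this prints "batch = 16": the folio is fully contained in one VMA and one PMD range, so all 16 PTEs can be installed in a single fault, which is the case the patch optimizes.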