From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH v2 15/57] mm: Convert do_swap_page()'s swapcache variable to a folio
Date: Fri, 2 Sep 2022 20:46:11 +0100
Message-Id: <20220902194653.1739778-16-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220902194653.1739778-1-willy@infradead.org>
References: <20220902194653.1739778-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
The 'swapcache' variable is used to track whether the page is from the
swapcache or not.  It can do this equally well by being the folio of
the page rather than the page itself, and this saves a number of calls
to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/memory.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index f172b148e29b..0184fe0ae736 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3718,8 +3718,8 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
 vm_fault_t do_swap_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct folio *folio;
-	struct page *page = NULL, *swapcache;
+	struct folio *swapcache, *folio = NULL;
+	struct page *page;
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
 	bool exclusive = false;
@@ -3762,11 +3762,11 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto out;
 
 	page = lookup_swap_cache(entry, vma, vmf->address);
-	swapcache = page;
 	if (page)
 		folio = page_folio(page);
+	swapcache = folio;
 
-	if (!page) {
+	if (!folio) {
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
 			/* skip swapcache */
@@ -3799,12 +3799,12 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		} else {
 			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
 						vmf);
-			swapcache = page;
 			if (page)
 				folio = page_folio(page);
+			swapcache = folio;
 		}
 
-		if (!page) {
+		if (!folio) {
 			/*
 			 * Back out if somebody else faulted in this pte
 			 * while we released the pte lock.
@@ -3856,7 +3856,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		page = ksm_might_need_to_copy(page, vma, vmf->address);
 		if (unlikely(!page)) {
 			ret = VM_FAULT_OOM;
-			page = swapcache;
 			goto out_page;
 		}
 		folio = page_folio(page);
@@ -3867,7 +3866,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		 * owner. Try removing the extra reference from the local LRU
 		 * pagevecs if required.
 		 */
-		if ((vmf->flags & FAULT_FLAG_WRITE) && page == swapcache &&
+		if ((vmf->flags & FAULT_FLAG_WRITE) && folio == swapcache &&
 		    !folio_test_ksm(folio) && !folio_test_lru(folio))
 			lru_add_drain();
 	}
@@ -3908,7 +3907,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		 * without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
 		 */
 		exclusive = pte_swp_exclusive(vmf->orig_pte);
-		if (page != swapcache) {
+		if (folio != swapcache) {
 			/*
 			 * We have a fresh page that is not exposed to the
 			 * swapcache -> certainly exclusive.
@@ -3976,7 +3975,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	vmf->orig_pte = pte;
 
 	/* ksm created a completely new copy */
-	if (unlikely(page != swapcache && swapcache)) {
+	if (unlikely(folio != swapcache && swapcache)) {
 		page_add_new_anon_rmap(page, vma, vmf->address);
 		folio_add_lru_vma(folio, vma);
 	} else {
@@ -3989,7 +3988,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
 
 	folio_unlock(folio);
-	if (page != swapcache && swapcache) {
+	if (folio != swapcache && swapcache) {
 		/*
 		 * Hold the lock to avoid the swap entry to be reused
 		 * until we take the PT lock for the pte_same() check
@@ -3998,8 +3997,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		 * so that the swap count won't change under a
 		 * parallel locked swapcache.
 		 */
-		unlock_page(swapcache);
-		put_page(swapcache);
+		folio_unlock(swapcache);
+		folio_put(swapcache);
 	}
 
 	if (vmf->flags & FAULT_FLAG_WRITE) {
@@ -4023,9 +4022,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	folio_unlock(folio);
 out_release:
 	folio_put(folio);
-	if (page != swapcache && swapcache) {
-		unlock_page(swapcache);
-		put_page(swapcache);
+	if (folio != swapcache && swapcache) {
+		folio_unlock(swapcache);
+		folio_put(swapcache);
 	}
 	if (si)
 		put_swap_device(si);
-- 
2.35.1