From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH v2 42/57] huge_memory: Convert do_huge_pmd_wp_page() to use a folio
Date: Fri, 2 Sep 2022 20:46:38 +0100
Message-Id: <20220902194653.1739778-43-willy@infradead.org>
In-Reply-To: <20220902194653.1739778-1-willy@infradead.org>
References: <20220902194653.1739778-1-willy@infradead.org>

Removes many calls to compound_head().  Does not remove the assumption
that a folio may not be larger than a PMD.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/huge_memory.c | 35 +++++++++++++++++++----------------
 1 file changed, 19 insertions(+), 16 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e9414ee57c5b..ffbc0412be1b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1307,6 +1307,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 {
 	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
 	struct vm_area_struct *vma = vmf->vma;
+	struct folio *folio;
 	struct page *page;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	pmd_t orig_pmd = vmf->orig_pmd;
@@ -1328,46 +1329,48 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 	}
 
 	page = pmd_page(orig_pmd);
+	folio = page_folio(page);
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 
 	/* Early check when only holding the PT lock. */
 	if (PageAnonExclusive(page))
 		goto reuse;
 
-	if (!trylock_page(page)) {
-		get_page(page);
+	if (!folio_trylock(folio)) {
+		folio_get(folio);
 		spin_unlock(vmf->ptl);
-		lock_page(page);
+		folio_lock(folio);
 		spin_lock(vmf->ptl);
 		if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) {
 			spin_unlock(vmf->ptl);
-			unlock_page(page);
-			put_page(page);
+			folio_unlock(folio);
+			folio_put(folio);
 			return 0;
 		}
-		put_page(page);
+		folio_put(folio);
 	}
 
 	/* Recheck after temporarily dropping the PT lock. */
 	if (PageAnonExclusive(page)) {
-		unlock_page(page);
+		folio_unlock(folio);
 		goto reuse;
 	}
 
 	/*
-	 * See do_wp_page(): we can only reuse the page exclusively if there are
-	 * no additional references. Note that we always drain the LRU
-	 * pagevecs immediately after adding a THP.
+	 * See do_wp_page(): we can only reuse the folio exclusively if
+	 * there are no additional references. Note that we always drain
+	 * the LRU pagevecs immediately after adding a THP.
	 */
-	if (page_count(page) > 1 + PageSwapCache(page) * thp_nr_pages(page))
+	if (folio_ref_count(folio) >
+			1 + folio_test_swapcache(folio) * folio_nr_pages(folio))
 		goto unlock_fallback;
-	if (PageSwapCache(page))
-		try_to_free_swap(page);
-	if (page_count(page) == 1) {
+	if (folio_test_swapcache(folio))
+		folio_free_swap(folio);
+	if (folio_ref_count(folio) == 1) {
 		pmd_t entry;
 		page_move_anon_rmap(page, vma);
-		unlock_page(page);
+		folio_unlock(folio);
 reuse:
 		if (unlikely(unshare)) {
 			spin_unlock(vmf->ptl);
@@ -1382,7 +1385,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 	}
 
 unlock_fallback:
-	unlock_page(page);
+	folio_unlock(folio);
 	spin_unlock(vmf->ptl);
 fallback:
 	__split_huge_pmd(vma, vmf->pmd, vmf->address, false, NULL);
-- 
2.35.1