From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, hughd@google.com
Subject: [PATCH 45/59] huge_memory: Convert do_huge_pmd_wp_page() to use a folio
Date: Mon, 8 Aug 2022 20:34:13 +0100
Message-Id: <20220808193430.3378317-46-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220808193430.3378317-1-willy@infradead.org>
References: <20220808193430.3378317-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Removes many calls to compound_head(). Does not remove the assumption
that a folio may not be larger than a PMD.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/huge_memory.c | 35 +++++++++++++++++++----------------
 1 file changed, 19 insertions(+), 16 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8a7c1b344abe..7b998f2083aa 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1313,6 +1313,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 {
 	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
 	struct vm_area_struct *vma = vmf->vma;
+	struct folio *folio;
 	struct page *page;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	pmd_t orig_pmd = vmf->orig_pmd;
@@ -1334,46 +1335,48 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 	}
 
 	page = pmd_page(orig_pmd);
+	folio = page_folio(page);
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 
 	/* Early check when only holding the PT lock. */
 	if (PageAnonExclusive(page))
 		goto reuse;
 
-	if (!trylock_page(page)) {
-		get_page(page);
+	if (!folio_trylock(folio)) {
+		folio_get(folio);
 		spin_unlock(vmf->ptl);
-		lock_page(page);
+		folio_lock(folio);
 		spin_lock(vmf->ptl);
 		if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) {
 			spin_unlock(vmf->ptl);
-			unlock_page(page);
-			put_page(page);
+			folio_unlock(folio);
+			folio_put(folio);
 			return 0;
 		}
-		put_page(page);
+		folio_put(folio);
 	}
 
 	/* Recheck after temporarily dropping the PT lock. */
 	if (PageAnonExclusive(page)) {
-		unlock_page(page);
+		folio_unlock(folio);
 		goto reuse;
 	}
 
 	/*
-	 * See do_wp_page(): we can only reuse the page exclusively if there are
-	 * no additional references. Note that we always drain the LRU
-	 * pagevecs immediately after adding a THP.
+	 * See do_wp_page(): we can only reuse the folio exclusively if
+	 * there are no additional references. Note that we always drain
+	 * the LRU pagevecs immediately after adding a THP.
 	 */
-	if (page_count(page) > 1 + PageSwapCache(page) * thp_nr_pages(page))
+	if (folio_ref_count(folio) >
+			1 + folio_test_swapcache(folio) * folio_nr_pages(folio))
 		goto unlock_fallback;
-	if (PageSwapCache(page))
-		try_to_free_swap(page);
-	if (page_count(page) == 1) {
+	if (folio_test_swapcache(folio))
+		folio_free_swap(folio);
+	if (folio_ref_count(folio) == 1) {
 		pmd_t entry;
 
 		page_move_anon_rmap(page, vma);
-		unlock_page(page);
+		folio_unlock(folio);
 reuse:
 		if (unlikely(unshare)) {
 			spin_unlock(vmf->ptl);
@@ -1388,7 +1391,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 	}
 
 unlock_fallback:
-	unlock_page(page);
+	folio_unlock(folio);
 	spin_unlock(vmf->ptl);
 fallback:
 	__split_huge_pmd(vma, vmf->pmd, vmf->address, false, NULL);
-- 
2.35.1
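
For context on why this conversion removes compound_head() calls: the
page-based helpers on the '-' side of the diff are thin wrappers that
re-derive the folio from the page on every call. A minimal sketch of
the pattern, simplified from definitions along the lines of those in
include/linux/pagemap.h (exact signatures may differ between trees):

	/* Each page-based helper re-derives the folio, i.e. one
	 * compound_head() lookup per call. */
	static inline bool trylock_page(struct page *page)
	{
		return folio_trylock(page_folio(page));
	}

	static inline void lock_page(struct page *page)
	{
		folio_lock(page_folio(page));
	}

Caching the result of page_folio() once, as this patch does, pays for
that lookup a single time instead of once per helper call. The reuse
test keeps the same logic as before: a THP in the swap cache holds one
reference per subpage, so the folio is only exclusively owned when
folio_ref_count(folio) does not exceed one reference from the page
table mapping plus folio_test_swapcache(folio) * folio_nr_pages(folio).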