From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH v2 52/57] huge_memory: Convert split_huge_page_to_list() to use a folio
Date: Fri, 2 Sep 2022 20:46:48 +0100
Message-Id: <20220902194653.1739778-53-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220902194653.1739778-1-willy@infradead.org>
References: <20220902194653.1739778-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Saves many calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/huge_memory.c | 49 ++++++++++++++++++++++++------------------------
 1 file changed, 24 insertions(+), 25 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ffbc0412be1b..366519eb2af8 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2614,27 +2614,26 @@ bool can_split_folio(struct folio *folio, int *pextra_pins)
 int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
 	struct folio *folio = page_folio(page);
-	struct page *head = &folio->page;
-	struct deferred_split *ds_queue = get_deferred_split_queue(head);
-	XA_STATE(xas, &head->mapping->i_pages, head->index);
+	struct deferred_split *ds_queue = get_deferred_split_queue(&folio->page);
+	XA_STATE(xas, &folio->mapping->i_pages, folio->index);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int extra_pins, ret;
 	pgoff_t end;
 	bool is_hzp;
 
-	VM_BUG_ON_PAGE(!PageLocked(head), head);
-	VM_BUG_ON_PAGE(!PageCompound(head), head);
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
 
-	is_hzp = is_huge_zero_page(head);
-	VM_WARN_ON_ONCE_PAGE(is_hzp, head);
+	is_hzp = is_huge_zero_page(&folio->page);
+	VM_WARN_ON_ONCE_FOLIO(is_hzp, folio);
 	if (is_hzp)
 		return -EBUSY;
 
-	if (PageWriteback(head))
+	if (folio_test_writeback(folio))
 		return -EBUSY;
 
-	if (PageAnon(head)) {
+	if (folio_test_anon(folio)) {
 		/*
 		 * The caller does not necessarily hold an mmap_lock that would
 		 * prevent the anon_vma disappearing so we first we take a
@@ -2643,7 +2642,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		 * is taken to serialise against parallel split or collapse
 		 * operations.
 		 */
-		anon_vma = page_get_anon_vma(head);
+		anon_vma = page_get_anon_vma(&folio->page);
 		if (!anon_vma) {
 			ret = -EBUSY;
 			goto out;
@@ -2652,7 +2651,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		mapping = NULL;
 		anon_vma_lock_write(anon_vma);
 	} else {
-		mapping = head->mapping;
+		mapping = folio->mapping;
 
 		/* Truncated ? */
 		if (!mapping) {
@@ -2660,7 +2659,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 			goto out;
 		}
 
-		xas_split_alloc(&xas, head, compound_order(head),
+		xas_split_alloc(&xas, folio, folio_order(folio),
 				mapping_gfp_mask(mapping) & GFP_RECLAIM_MASK);
 		if (xas_error(&xas)) {
 			ret = xas_error(&xas);
@@ -2675,7 +2674,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		 * but on 32-bit, i_size_read() takes an irq-unsafe seqlock,
 		 * which cannot be nested inside the page tree lock. So note
 		 * end now: i_size itself may be changed at any moment, but
-		 * head page lock is good enough to serialize the trimming.
+		 * folio lock is good enough to serialize the trimming.
 		 */
 		end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE);
 		if (shmem_mapping(mapping))
@@ -2691,38 +2690,38 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		goto out_unlock;
 	}
 
-	unmap_page(head);
+	unmap_page(&folio->page);
 
 	/* block interrupt reentry in xa_lock and spinlock */
 	local_irq_disable();
 	if (mapping) {
 		/*
-		 * Check if the head page is present in page cache.
-		 * We assume all tail are present too, if head is there.
+		 * Check if the folio is present in page cache.
+		 * We assume all tail are present too, if folio is there.
 		 */
 		xas_lock(&xas);
 		xas_reset(&xas);
-		if (xas_load(&xas) != head)
+		if (xas_load(&xas) != folio)
 			goto fail;
 	}
 
 	/* Prevent deferred_split_scan() touching ->_refcount */
 	spin_lock(&ds_queue->split_queue_lock);
-	if (page_ref_freeze(head, 1 + extra_pins)) {
-		if (!list_empty(page_deferred_list(head))) {
+	if (folio_ref_freeze(folio, 1 + extra_pins)) {
+		if (!list_empty(page_deferred_list(&folio->page))) {
 			ds_queue->split_queue_len--;
-			list_del(page_deferred_list(head));
+			list_del(page_deferred_list(&folio->page));
 		}
 		spin_unlock(&ds_queue->split_queue_lock);
 		if (mapping) {
-			int nr = thp_nr_pages(head);
+			int nr = folio_nr_pages(folio);
 
-			xas_split(&xas, head, thp_order(head));
-			if (PageSwapBacked(head)) {
-				__mod_lruvec_page_state(head, NR_SHMEM_THPS,
+			xas_split(&xas, folio, folio_order(folio));
+			if (folio_test_swapbacked(folio)) {
+				__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
 							-nr);
 			} else {
-				__mod_lruvec_page_state(head, NR_FILE_THPS,
+				__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
 							-nr);
 				filemap_nr_thps_dec(mapping);
 			}
-- 
2.35.1