From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org,
	Christoph Hellwig, Jeff Layton
Subject: [PATCH v9 32/96] mm/filemap: Add folio_end_writeback
Date: Wed, 5 May 2021 16:05:24 +0100
Message-Id: <20210505150628.111735-33-willy@infradead.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210505150628.111735-1-willy@infradead.org>
References: <20210505150628.111735-1-willy@infradead.org>
MIME-Version: 1.0

Add an end_page_writeback() wrapper function for users that are not yet
converted to folios.  folio_end_writeback() is less than half the size
of end_page_writeback() at just 105 bytes compared to 213 bytes, due to
removing all the compound_head() calls.  The 30 byte wrapper function
makes this a net saving of 70 bytes.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig
Acked-by: Jeff Layton
---
 include/linux/pagemap.h |  3 ++-
 mm/filemap.c            | 40 ++++++++++++++++++++--------------------
 mm/folio-compat.c       |  6 ++++++
 3 files changed, 28 insertions(+), 21 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 615f5b3e65c4..f1078272fb26 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -829,7 +829,8 @@ static inline int wait_on_page_locked_killable(struct page *page)
 int put_and_wait_on_page_locked(struct page *page, int state);
 void wait_on_page_writeback(struct page *page);
 int wait_on_page_writeback_killable(struct page *page);
-extern void end_page_writeback(struct page *page);
+void end_page_writeback(struct page *page);
+void folio_end_writeback(struct folio *folio);
 void wait_for_stable_page(struct page *page);
 
 void page_endio(struct page *page, bool is_write, int err);
diff --git a/mm/filemap.c b/mm/filemap.c
index 06cb717c7c60..9d2cfa5d3a40 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1175,11 +1175,11 @@ static void wake_up_page_bit(struct page *page, int bit_nr)
 	spin_unlock_irqrestore(&q->lock, flags);
 }
 
-static void wake_up_page(struct page *page, int bit)
+static void folio_wake(struct folio *folio, int bit)
 {
-	if (!PageWaiters(page))
+	if (!folio_waiters(folio))
 		return;
-	wake_up_page_bit(page, bit);
+	wake_up_page_bit(&folio->page, bit);
 }
 
 /*
@@ -1514,38 +1514,38 @@ int wait_on_page_private_2_killable(struct page *page)
 EXPORT_SYMBOL(wait_on_page_private_2_killable);
 
 /**
- * end_page_writeback - end writeback against a page
- * @page: the page
+ * folio_end_writeback - End writeback against a folio.
+ * @folio: The folio.
  */
-void end_page_writeback(struct page *page)
+void folio_end_writeback(struct folio *folio)
 {
 	/*
-	 * TestClearPageReclaim could be used here but it is an atomic
+	 * folio_test_clear_reclaim_flag() could be used here but it is an atomic
 	 * operation and overkill in this particular case. Failing to
-	 * shuffle a page marked for immediate reclaim is too mild to
+	 * shuffle a folio marked for immediate reclaim is too mild to
 	 * justify taking an atomic operation penalty at the end of
-	 * ever page writeback.
+	 * every folio writeback.
 	 */
-	if (PageReclaim(page)) {
-		ClearPageReclaim(page);
-		folio_rotate_reclaimable(page_folio(page));
+	if (folio_reclaim(folio)) {
+		folio_clear_reclaim_flag(folio);
+		folio_rotate_reclaimable(folio);
 	}
 
 	/*
-	 * Writeback does not hold a page reference of its own, relying
+	 * Writeback does not hold a folio reference of its own, relying
 	 * on truncation to wait for the clearing of PG_writeback.
-	 * But here we must make sure that the page is not freed and
-	 * reused before the wake_up_page().
+	 * But here we must make sure that the folio is not freed and
+	 * reused before the folio_wake().
 	 */
-	get_page(page);
-	if (!test_clear_page_writeback(page))
+	folio_get(folio);
+	if (!test_clear_page_writeback(&folio->page))
 		BUG();
 
 	smp_mb__after_atomic();
-	wake_up_page(page, PG_writeback);
-	put_page(page);
+	folio_wake(folio, PG_writeback);
+	folio_put(folio);
 }
-EXPORT_SYMBOL(end_page_writeback);
+EXPORT_SYMBOL(folio_end_writeback);
 
 /*
  * After completing I/O on a page, call this routine to update the page
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 91b3d00a92f7..526843d03d58 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -17,3 +17,9 @@ void unlock_page(struct page *page)
 	return folio_unlock(page_folio(page));
 }
 EXPORT_SYMBOL(unlock_page);
+
+void end_page_writeback(struct page *page)
+{
+	return folio_end_writeback(page_folio(page));
+}
+EXPORT_SYMBOL(end_page_writeback);
-- 
2.30.2
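
[Illustrative note, not part of the patch: a minimal sketch of how callers
see the two entry points after this change.  The filesystem callback names
below (example_end_io, example_folio_end_io) are hypothetical; only
end_page_writeback(), folio_end_writeback() and page_folio() come from the
patch and the folio series itself.]

	#include <linux/pagemap.h>

	/*
	 * An unconverted filesystem keeps calling the page-based API; the
	 * mm/folio-compat.c wrapper forwards to folio_end_writeback() via a
	 * page_folio() lookup, so existing callers need no changes.
	 */
	static void example_end_io(struct page *page)
	{
		end_page_writeback(page);
	}

	/*
	 * A converted filesystem already holds a folio and calls the folio
	 * API directly, skipping the compound_head() lookups that make the
	 * page-based path larger.
	 */
	static void example_folio_end_io(struct folio *folio)
	{
		folio_end_writeback(folio);
	}

The size savings therefore accrue to converted callers, while the small
compat wrapper keeps the old interface working during the transition.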