From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	linux-cachefs@redhat.com,
	linux-afs@lists.infradead.org
Subject: [PATCH v5 21/27] mm/filemap: Add end_folio_writeback
Date: Sat, 20 Mar 2021 05:40:58 +0000
Message-Id: <20210320054104.1300774-22-willy@infradead.org>
In-Reply-To: <20210320054104.1300774-1-willy@infradead.org>
References: <20210320054104.1300774-1-willy@infradead.org>
MIME-Version: 1.0

Add an end_page_writeback() wrapper function for users that are not yet
converted to folios. end_folio_writeback() is less than half the size of
end_page_writeback() at just 105 bytes compared to 213 bytes, due to
removing all the compound_head() calls. The 30-byte wrapper function
makes this a net saving of 70 bytes.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h |  3 ++-
 mm/filemap.c            | 38 +++++++++++++++++++-------------------
 mm/folio-compat.c       |  6 ++++++
 3 files changed, 27 insertions(+), 20 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index a8e19e4e0b09..2ee6b1b9561c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -809,7 +809,8 @@ static inline int wait_on_page_locked_killable(struct page *page)
 int put_and_wait_on_page_locked(struct page *page, int state);
 void wait_on_page_writeback(struct page *page);
 int wait_on_page_writeback_killable(struct page *page);
-extern void end_page_writeback(struct page *page);
+void end_page_writeback(struct page *page);
+void end_folio_writeback(struct folio *folio);
 void wait_for_stable_page(struct page *page);
 
 void page_endio(struct page *page, bool is_write, int err);
diff --git a/mm/filemap.c b/mm/filemap.c
index 99758045ec2d..dc7deb8c36ee 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1175,11 +1175,11 @@ static void wake_up_page_bit(struct page *page, int bit_nr)
 	spin_unlock_irqrestore(&q->lock, flags);
 }
 
-static void wake_up_page(struct page *page, int bit)
+static void wake_up_folio(struct folio *folio, int bit)
 {
-	if (!PageWaiters(page))
+	if (!FolioWaiters(folio))
 		return;
-	wake_up_page_bit(page, bit);
+	wake_up_page_bit(&folio->page, bit);
 }
 
 /*
@@ -1473,38 +1473,38 @@ void unlock_page_private_2(struct page *page)
 EXPORT_SYMBOL(unlock_page_private_2);
 
 /**
- * end_page_writeback - end writeback against a page
- * @page: the page
+ * end_folio_writeback - End writeback against a folio.
+ * @folio: The folio.
  */
-void end_page_writeback(struct page *page)
+void end_folio_writeback(struct folio *folio)
 {
 	/*
 	 * TestClearPageReclaim could be used here but it is an atomic
 	 * operation and overkill in this particular case. Failing to
-	 * shuffle a page marked for immediate reclaim is too mild to
+	 * shuffle a folio marked for immediate reclaim is too mild to
 	 * justify taking an atomic operation penalty at the end of
-	 * ever page writeback.
+	 * every folio writeback.
 	 */
-	if (PageReclaim(page)) {
-		ClearPageReclaim(page);
-		rotate_reclaimable_page(page);
+	if (FolioReclaim(folio)) {
+		ClearFolioReclaim(folio);
+		rotate_reclaimable_page(&folio->page);
 	}
 
 	/*
-	 * Writeback does not hold a page reference of its own, relying
+	 * Writeback does not hold a folio reference of its own, relying
 	 * on truncation to wait for the clearing of PG_writeback.
-	 * But here we must make sure that the page is not freed and
-	 * reused before the wake_up_page().
+	 * But here we must make sure that the folio is not freed and
+	 * reused before the wake_up_folio().
 	 */
-	get_page(page);
-	if (!test_clear_page_writeback(page))
+	get_folio(folio);
+	if (!test_clear_page_writeback(&folio->page))
 		BUG();
 
 	smp_mb__after_atomic();
-	wake_up_page(page, PG_writeback);
-	put_page(page);
+	wake_up_folio(folio, PG_writeback);
+	put_folio(folio);
 }
-EXPORT_SYMBOL(end_page_writeback);
+EXPORT_SYMBOL(end_folio_writeback);
 
 /*
  * After completing I/O on a page, call this routine to update the page
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 02798abf19a1..d1a1dfe52589 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -17,3 +17,9 @@ void unlock_page(struct page *page)
 	return unlock_folio(page_folio(page));
 }
 EXPORT_SYMBOL(unlock_page);
+
+void end_page_writeback(struct page *page)
+{
+	return end_folio_writeback(page_folio(page));
+}
+EXPORT_SYMBOL(end_page_writeback);
-- 
2.30.2
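
A brief caller-side illustration of the split above. The sketch below is
not taken from this series: the bio-completion handlers and their names
are assumed purely for illustration, and only page_folio(),
end_folio_writeback() and the end_page_writeback() wrapper come from the
folio patches. A filesystem that has been converted works on the folio
directly, while an unconverted one keeps calling the compatibility
wrapper in mm/folio-compat.c.

/*
 * Caller-side sketch, assuming this patch and the earlier folio
 * infrastructure (struct folio, page_folio()) are applied.  The bio
 * plumbing and function names are illustrative only.
 */
#include <linux/bio.h>
#include <linux/pagemap.h>

/* Converted filesystem: operate on the folio, no compound_head() calls. */
static void converted_write_end_io(struct bio *bio)
{
	struct folio *folio = page_folio(bio_first_page_all(bio));

	end_folio_writeback(folio);
	bio_put(bio);
}

/* Unconverted filesystem: keep calling the wrapper from folio-compat.c. */
static void legacy_write_end_io(struct bio *bio)
{
	struct page *page = bio_first_page_all(bio);

	end_page_writeback(page);
	bio_put(bio);
}

The size difference quoted in the commit message comes from the converted
path: end_folio_writeback() never needs compound_head() because the
caller has already resolved the head page via page_folio().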