From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH 21/25] mm: Convert wait_on_page_bit to wait_on_folio_bit
Date: Wed, 16 Dec 2020 18:23:31 +0000
Message-Id: <20201216182335.27227-22-willy@infradead.org>
In-Reply-To: <20201216182335.27227-1-willy@infradead.org>
References: <20201216182335.27227-1-willy@infradead.org>
MIME-Version: 1.0

We must deal with folios here, otherwise we'll get the wrong waitqueue
and fail to receive wakeups.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/afs/write.c          |  2 +-
 include/linux/pagemap.h | 14 ++++++-----
 mm/filemap.c            | 54 ++++++++++++++++++-----------------
 mm/page-writeback.c     |  7 +++---
 4 files changed, 37 insertions(+), 40 deletions(-)

diff --git a/fs/afs/write.c b/fs/afs/write.c
index c9195fc67fd8..b58e7a69a464 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -852,7 +852,7 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf)
 #endif
 
 	if (PageWriteback(vmf->page) &&
-	    wait_on_page_bit_killable(vmf->page, PG_writeback) < 0)
+	    wait_on_folio_bit_killable(page_folio(vmf->page), PG_writeback) < 0)
 		return VM_FAULT_RETRY;
 
 	if (lock_page_killable(vmf->page) < 0)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 2283e58ebe32..ac4d3e2ac86c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -717,8 +717,8 @@ static inline int lock_page_or_retry(struct page *page, struct mm_struct *mm,
  * This is exported only for wait_on_page_locked/wait_on_page_writeback, etc.,
  * and should not be used directly.
  */
-extern void wait_on_page_bit(struct page *page, int bit_nr);
-extern int wait_on_page_bit_killable(struct page *page, int bit_nr);
+extern void wait_on_folio_bit(struct folio *folio, int bit_nr);
+extern int wait_on_folio_bit_killable(struct folio *folio, int bit_nr);
 
 /*
  * Wait for a page to be unlocked.
@@ -729,15 +729,17 @@ extern int wait_on_page_bit_killable(struct page *page, int bit_nr);
  */
 static inline void wait_on_page_locked(struct page *page)
 {
-	if (PageLocked(page))
-		wait_on_page_bit(compound_head(page), PG_locked);
+	struct folio *folio = page_folio(page);
+	if (FolioLocked(folio))
+		wait_on_folio_bit(folio, PG_locked);
 }
 
 static inline int wait_on_page_locked_killable(struct page *page)
 {
-	if (!PageLocked(page))
+	struct folio *folio = page_folio(page);
+	if (!FolioLocked(folio))
 		return 0;
-	return wait_on_page_bit_killable(compound_head(page), PG_locked);
+	return wait_on_folio_bit_killable(folio, PG_locked);
 }
 
 extern void put_and_wait_on_page_locked(struct page *page);
diff --git a/mm/filemap.c b/mm/filemap.c
index 3c5eb39452c3..a5925450ee13 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1075,7 +1075,7 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync,
 	 *
 	 * So update the flags atomically, and wake up the waiter
 	 * afterwards to avoid any races. This store-release pairs
-	 * with the load-acquire in wait_on_page_bit_common().
+	 * with the load-acquire in wait_on_folio_bit_common().
 	 */
 	smp_store_release(&wait->flags, flags | WQ_FLAG_WOKEN);
 	wake_up_state(wait->private, mode);
@@ -1156,7 +1156,7 @@ static void wake_up_folio(struct folio *folio, int bit)
 }
 
 /*
- * A choice of three behaviors for wait_on_page_bit_common():
+ * A choice of three behaviors for wait_on_folio_bit_common():
  */
 enum behavior {
 	EXCLUSIVE,	/* Hold ref to page and take the bit when woken, like
@@ -1190,9 +1190,10 @@ static inline bool trylock_page_bit_common(struct page *page, int bit_nr,
 /* How many times do we accept lock stealing from under a waiter? */
 int sysctl_page_lock_unfairness = 5;
 
-static inline int wait_on_page_bit_common(wait_queue_head_t *q,
-	struct page *page, int bit_nr, int state, enum behavior behavior)
+static inline int wait_on_folio_bit_common(struct folio *folio, int bit_nr,
+	int state, enum behavior behavior)
 {
+	wait_queue_head_t *q = page_waitqueue(&folio->page);
 	int unfairness = sysctl_page_lock_unfairness;
 	struct wait_page_queue wait_page;
 	wait_queue_entry_t *wait = &wait_page.wait;
@@ -1201,8 +1202,8 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 	unsigned long pflags;
 
 	if (bit_nr == PG_locked &&
-	    !PageUptodate(page) && PageWorkingset(page)) {
-		if (!PageSwapBacked(page)) {
+	    !FolioUptodate(folio) && FolioWorkingset(folio)) {
+		if (!FolioSwapBacked(folio)) {
 			delayacct_thrashing_start();
 			delayacct = true;
 		}
@@ -1212,7 +1213,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 
 	init_wait(wait);
 	wait->func = wake_page_function;
-	wait_page.page = page;
+	wait_page.page = &folio->page;
 	wait_page.bit_nr = bit_nr;
 
 repeat:
@@ -1227,7 +1228,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 		 * Do one last check whether we can get the
 		 * page bit synchronously.
 		 *
-		 * Do the SetPageWaiters() marking before that
+		 * Do the SetFolioWaiters() marking before that
 		 * to let any waker we _just_ missed know they
 		 * need to wake us up (otherwise they'll never
 		 * even go to the slow case that looks at the
@@ -1238,8 +1239,8 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 		 * lock to avoid races.
 		 */
 		spin_lock_irq(&q->lock);
-		SetPageWaiters(page);
-		if (!trylock_page_bit_common(page, bit_nr, wait))
+		SetFolioWaiters(folio);
+		if (!trylock_page_bit_common(&folio->page, bit_nr, wait))
 			__add_wait_queue_entry_tail(q, wait);
 		spin_unlock_irq(&q->lock);
 
@@ -1249,10 +1250,10 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 		 * see whether the page bit testing has already
 		 * been done by the wake function.
 		 *
-		 * We can drop our reference to the page.
+		 * We can drop our reference to the folio.
 		 */
 		if (behavior == DROP)
-			put_page(page);
+			put_folio(folio);
 
 		/*
 		 * Note that until the "finish_wait()", or until
@@ -1289,7 +1290,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 		 *
 		 * And if that fails, we'll have to retry this all.
 		 */
-		if (unlikely(test_and_set_bit(bit_nr, &page->flags)))
+		if (unlikely(test_and_set_bit(bit_nr, folio_flags(folio))))
 			goto repeat;
 
 		wait->flags |= WQ_FLAG_DONE;
@@ -1329,19 +1330,17 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 	return wait->flags & WQ_FLAG_WOKEN ?
 		0 : -EINTR;
 }
 
-void wait_on_page_bit(struct page *page, int bit_nr)
+void wait_on_folio_bit(struct folio *folio, int bit_nr)
 {
-	wait_queue_head_t *q = page_waitqueue(page);
-	wait_on_page_bit_common(q, page, bit_nr, TASK_UNINTERRUPTIBLE, SHARED);
+	wait_on_folio_bit_common(folio, bit_nr, TASK_UNINTERRUPTIBLE, SHARED);
 }
-EXPORT_SYMBOL(wait_on_page_bit);
+EXPORT_SYMBOL(wait_on_folio_bit);
 
-int wait_on_page_bit_killable(struct page *page, int bit_nr)
+int wait_on_folio_bit_killable(struct folio *folio, int bit_nr)
 {
-	wait_queue_head_t *q = page_waitqueue(page);
-	return wait_on_page_bit_common(q, page, bit_nr, TASK_KILLABLE, SHARED);
+	return wait_on_folio_bit_common(folio, bit_nr, TASK_KILLABLE, SHARED);
 }
-EXPORT_SYMBOL(wait_on_page_bit_killable);
+EXPORT_SYMBOL(wait_on_folio_bit_killable);
 
 static int __wait_on_page_locked_async(struct page *page,
 				       struct wait_page_queue *wait, bool set)
@@ -1393,11 +1392,8 @@ static int wait_on_page_locked_async(struct page *page,
  */
 void put_and_wait_on_page_locked(struct page *page)
 {
-	wait_queue_head_t *q;
-
-	page = compound_head(page);
-	q = page_waitqueue(page);
-	wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE, DROP);
+	wait_on_folio_bit_common(page_folio(page), PG_locked,
+				TASK_UNINTERRUPTIBLE, DROP);
 }
 
 /**
@@ -1530,16 +1526,14 @@ EXPORT_SYMBOL_GPL(page_endio);
  */
 void __lock_folio(struct folio *folio)
 {
-	wait_queue_head_t *q = page_waitqueue(&folio->page);
-	wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_UNINTERRUPTIBLE,
+	wait_on_folio_bit_common(folio, PG_locked, TASK_UNINTERRUPTIBLE,
 				EXCLUSIVE);
 }
 EXPORT_SYMBOL(__lock_folio);
 
 int __lock_folio_killable(struct folio *folio)
 {
-	wait_queue_head_t *q = page_waitqueue(&folio->page);
-	return wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_KILLABLE,
+	return wait_on_folio_bit_common(folio, PG_locked, TASK_KILLABLE,
 				EXCLUSIVE);
 }
 EXPORT_SYMBOL_GPL(__lock_folio_killable);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 586042472ac9..500ed9afcec2 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2826,9 +2826,10 @@ EXPORT_SYMBOL(__test_set_page_writeback);
  */
 void wait_on_page_writeback(struct page *page)
 {
-	if (PageWriteback(page)) {
-		trace_wait_on_page_writeback(page, page_mapping(page));
-		wait_on_page_bit(page, PG_writeback);
+	struct folio *folio = page_folio(page);
+	if (FolioWriteback(folio)) {
+		trace_wait_on_page_writeback(page, folio_mapping(folio));
+		wait_on_folio_bit(folio, PG_writeback);
 	}
 }
 EXPORT_SYMBOL_GPL(wait_on_page_writeback);
-- 
2.29.2