From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linuxfoundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 07/21] swap: Convert add_to_swap() to take a folio
Date: Fri, 29 Apr 2022 20:23:15 +0100
Message-Id: <20220429192329.3034378-8-willy@infradead.org>
In-Reply-To: <20220429192329.3034378-1-willy@infradead.org>
References: <20220429192329.3034378-1-willy@infradead.org>

The only caller already has a folio available, so this saves a
conversion.  Also convert the return type to boolean.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/swap.h |  6 +++---
 mm/swap_state.c      | 47 +++++++++++++++++++++++++----------------------
 mm/vmscan.c          |  6 +++---
 3 files changed, 31 insertions(+), 28 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 147a9a173508..f87bb495e482 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -449,7 +449,7 @@ static inline unsigned long total_swapcache_pages(void)
 }
 
 extern void show_swap_cache_info(void);
-extern int add_to_swap(struct page *page);
+bool add_to_swap(struct folio *folio);
 extern void *get_shadow_from_swap_cache(swp_entry_t entry);
 extern int add_to_swap_cache(struct page *page, swp_entry_t entry,
 			gfp_t gfp, void **shadowp);
@@ -630,9 +630,9 @@ struct page *find_get_incore_page(struct address_space *mapping, pgoff_t index)
 	return find_get_page(mapping, index);
 }
 
-static inline int add_to_swap(struct page *page)
+static inline bool add_to_swap(struct folio *folio)
 {
-	return 0;
+	return false;
 }
 
 static inline void *get_shadow_from_swap_cache(swp_entry_t entry)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 989ad18f5468..858d8904b06e 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -175,24 +175,26 @@ void __delete_from_swap_cache(struct page *page,
 }
 
 /**
- * add_to_swap - allocate swap space for a page
- * @page: page we want to move to swap
+ * add_to_swap - allocate swap space for a folio
+ * @folio: folio we want to move to swap
  *
- * Allocate swap space for the page and add the page to the
- * swap cache.  Caller needs to hold the page lock.
+ * Allocate swap space for the folio and add the folio to the
+ * swap cache.
+ *
+ * Context: Caller needs to hold the folio lock.
+ * Return: Whether the folio was added to the swap cache.
  */
-int add_to_swap(struct page *page)
+bool add_to_swap(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
 	swp_entry_t entry;
 	int err;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(!PageUptodate(page), page);
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_BUG_ON_FOLIO(!folio_test_uptodate(folio), folio);
 
 	entry = folio_alloc_swap(folio);
 	if (!entry.val)
-		return 0;
+		return false;
 
 	/*
 	 * XArray node allocations from PF_MEMALLOC contexts could
@@ -205,7 +207,7 @@ int add_to_swap(struct page *page)
 	/*
 	 * Add it to the swap cache.
 	 */
-	err = add_to_swap_cache(page, entry,
+	err = add_to_swap_cache(&folio->page, entry,
 			__GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN, NULL);
 	if (err)
 		/*
@@ -214,22 +216,23 @@ int add_to_swap(struct page *page)
 		 */
 		goto fail;
 	/*
-	 * Normally the page will be dirtied in unmap because its pte should be
-	 * dirty. A special case is MADV_FREE page. The page's pte could have
-	 * dirty bit cleared but the page's SwapBacked bit is still set because
-	 * clearing the dirty bit and SwapBacked bit has no lock protected. For
-	 * such page, unmap will not set dirty bit for it, so page reclaim will
-	 * not write the page out. This can cause data corruption when the page
-	 * is swap in later. Always setting the dirty bit for the page solves
-	 * the problem.
+	 * Normally the folio will be dirtied in unmap because its
+	 * pte should be dirty. A special case is MADV_FREE page. The
+	 * page's pte could have dirty bit cleared but the folio's
+	 * SwapBacked flag is still set because clearing the dirty bit
+	 * and SwapBacked flag has no lock protected. For such folio,
+	 * unmap will not set dirty bit for it, so folio reclaim will
+	 * not write the folio out. This can cause data corruption when
+	 * the folio is swapped in later.  Always setting the dirty flag
+	 * for the folio solves the problem.
 	 */
-	set_page_dirty(page);
+	folio_mark_dirty(folio);
 
-	return 1;
+	return true;
 
 fail:
-	put_swap_page(page, entry);
-	return 0;
+	put_swap_page(&folio->page, entry);
+	return false;
 }
 
 /*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 19c1bcd886ef..8f7c32b3d65e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1710,8 +1710,8 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 							page_list))
 						goto activate_locked;
 				}
-				if (!add_to_swap(page)) {
-					if (!PageTransHuge(page))
+				if (!add_to_swap(folio)) {
+					if (!folio_test_large(folio))
 						goto activate_locked_split;
 					/* Fallback to swap normal pages */
 					if (split_folio_to_list(folio,
@@ -1720,7 +1720,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 						count_vm_event(THP_SWPOUT_FALLBACK);
 #endif
-						if (!add_to_swap(page))
+						if (!add_to_swap(folio))
 							goto activate_locked_split;
 					}
 
-- 
2.34.1
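
For reference, a minimal sketch of how a caller that still holds a
struct page is expected to use the converted interface.  try_swap_out()
is an illustrative wrapper for this note only, not a function added by
the patch:

#include <linux/swap.h>

/* Illustrative helper: convert to a folio once at the boundary, then
 * work in folio terms from there on. */
static bool try_swap_out(struct page *page)
{
	struct folio *folio = page_folio(page);	/* conversion is now the caller's job */

	/* add_to_swap() returns true only if a swap slot was allocated
	 * and the folio was added to the swap cache; the folio must
	 * already be locked, as documented above. */
	return add_to_swap(folio);
}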