From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton, Hugh Dickins, Johannes Weiner, Yosry Ahmed, Nhat Pham
Cc: Matthew Wilcox, Chengming Zhou, Baolin Wang, linux-mm@kvack.org
Subject: [PATCH 2/6] mm: stop passing a writeback_control structure to shmem_writeout
Date: Tue, 10 Jun 2025 07:49:38 +0200
Message-ID: <20250610054959.2057526-3-hch@lst.de>
X-Mailer: git-send-email 2.47.2
In-Reply-To: <20250610054959.2057526-1-hch@lst.de>
References: <20250610054959.2057526-1-hch@lst.de>

shmem_writeout only needs the swap_iocb cookie and the split folio list.
Pass those explicitly and remove the now unused list member from struct
writeback_control.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c |  2 +-
 drivers/gpu/drm/ttm/ttm_backup.c          |  9 +-------
 include/linux/shmem_fs.h                  |  5 ++++-
 include/linux/writeback.h                 |  3 ---
 mm/shmem.c                                | 26 +++++++++++++----------
 mm/vmscan.c                               | 12 +++++------
 6 files changed, 26 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 19a3eb82dc6a..24d8daa4fdb3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -317,7 +317,7 @@ void __shmem_writeback(size_t size, struct address_space *mapping)
 
 		if (folio_mapped(folio))
 			folio_redirty_for_writepage(&wbc, folio);
 		else
-			error = shmem_writeout(folio, &wbc);
+			error = shmem_writeout(folio, NULL, NULL);
 	}
 }
diff --git a/drivers/gpu/drm/ttm/ttm_backup.c b/drivers/gpu/drm/ttm/ttm_backup.c
index ffaab68bd5dd..6f2e58be4f3e 100644
--- a/drivers/gpu/drm/ttm/ttm_backup.c
+++ b/drivers/gpu/drm/ttm/ttm_backup.c
@@ -112,15 +112,8 @@ ttm_backup_backup_page(struct file *backup, struct page *page,
 
 	if (writeback && !folio_mapped(to_folio) &&
 	    folio_clear_dirty_for_io(to_folio)) {
-		struct writeback_control wbc = {
-			.sync_mode = WB_SYNC_NONE,
-			.nr_to_write = SWAP_CLUSTER_MAX,
-			.range_start = 0,
-			.range_end = LLONG_MAX,
-			.for_reclaim = 1,
-		};
 		folio_set_reclaim(to_folio);
-		ret = shmem_writeout(to_folio, &wbc);
+		ret = shmem_writeout(to_folio, NULL, NULL);
 		if (!folio_test_writeback(to_folio))
 			folio_clear_reclaim(to_folio);
 		/*
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 5f03a39a26f7..6d0f9c599ff7 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -11,6 +11,8 @@
 #include
 #include
 
+struct swap_iocb;
+
 /* inode in-kernel data */
 
 #ifdef CONFIG_TMPFS_QUOTA
@@ -107,7 +109,8 @@ static inline bool shmem_mapping(struct address_space *mapping)
 
 void shmem_unlock_mapping(struct address_space *mapping);
 struct page *shmem_read_mapping_page_gfp(struct address_space *mapping,
					 pgoff_t index, gfp_t gfp_mask);
-int shmem_writeout(struct folio *folio, struct writeback_control *wbc);
+int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
+		struct list_head *folio_list);
 void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
 int shmem_unuse(unsigned int type);
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index eda4b62511f7..82f217970092 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -79,9 +79,6 @@ struct writeback_control {
	 */
	struct swap_iocb **swap_plug;
 
-	/* Target list for splitting a large folio */
-	struct list_head *list;
-
	/* internal fields used by the ->writepages implementation: */
	struct folio_batch fbatch;
	pgoff_t index;
diff --git a/mm/shmem.c b/mm/shmem.c
index 0c5fb4ffa03a..71039b9847c5 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1540,11 +1540,13 @@ int shmem_unuse(unsigned int type)
 /**
  * shmem_writeout - Write the folio to swap
  * @folio: The folio to write
- * @wbc: How writeback is to be done
+ * @plug: swap plug
+ * @folio_list: list to put back folios on split
  *
  * Move the folio from the page cache to the swap cache.
  */
-int shmem_writeout(struct folio *folio, struct writeback_control *wbc)
+int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
+		struct list_head *folio_list)
 {
	struct address_space *mapping = folio->mapping;
	struct inode *inode = mapping->host;
@@ -1554,9 +1556,6 @@ int shmem_writeout(struct folio *folio, struct writeback_control *wbc)
	int nr_pages;
	bool split = false;
 
-	if (WARN_ON_ONCE(!wbc->for_reclaim))
-		goto redirty;
-
	if ((info->flags & VM_LOCKED) || sbinfo->noswap)
		goto redirty;
 
@@ -1583,7 +1582,7 @@ int shmem_writeout(struct folio *folio, struct writeback_control *wbc)
 try_split:
		/* Ensure the subpages are still dirty */
		folio_test_set_dirty(folio);
-		if (split_folio_to_list(folio, wbc->list))
+		if (split_folio_to_list(folio, folio_list))
			goto redirty;
		folio_clear_dirty(folio);
	}
@@ -1636,13 +1635,21 @@ int shmem_writeout(struct folio *folio, struct writeback_control *wbc)
		list_add(&info->swaplist, &shmem_swaplist);
 
	if (!folio_alloc_swap(folio, __GFP_HIGH | __GFP_NOMEMALLOC |
			__GFP_NOWARN)) {
+		struct writeback_control wbc = {
+			.sync_mode = WB_SYNC_NONE,
+			.nr_to_write = SWAP_CLUSTER_MAX,
+			.range_start = 0,
+			.range_end = LLONG_MAX,
+			.for_reclaim = 1,
+			.swap_plug = plug,
+		};
		shmem_recalc_inode(inode, 0, nr_pages);
		swap_shmem_alloc(folio->swap, nr_pages);
		shmem_delete_from_page_cache(folio,
				swp_to_radix_entry(folio->swap));
		mutex_unlock(&shmem_swaplist_mutex);
		BUG_ON(folio_mapped(folio));
-		return swap_writeout(folio, wbc);
+		return swap_writeout(folio, &wbc);
	}
	if (!info->swapped)
		list_del_init(&info->swaplist);
@@ -1651,10 +1658,7 @@ int shmem_writeout(struct folio *folio, struct writeback_control *wbc)
		goto try_split;
 redirty:
	folio_mark_dirty(folio);
-	if (wbc->for_reclaim)
-		return AOP_WRITEPAGE_ACTIVATE;	/* Return with folio locked */
-	folio_unlock(folio);
-	return 0;
+	return AOP_WRITEPAGE_ACTIVATE;	/* Return with folio locked */
 }
 EXPORT_SYMBOL_GPL(shmem_writeout);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3af99b0978b5..d56a4e8a1ed9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -669,15 +669,13 @@ static pageout_t writeout(struct folio *folio, struct address_space *mapping,
 
	/*
	 * The large shmem folio can be split if CONFIG_THP_SWAP is not enabled
-	 * or we failed to allocate contiguous swap entries.
+	 * or we failed to allocate contiguous swap entries, in which case
+	 * the split out folios get added back to folio_list.
	 */
-	if (shmem_mapping(mapping)) {
-		if (folio_test_large(folio))
-			wbc.list = folio_list;
-		res = shmem_writeout(folio, &wbc);
-	} else {
+	if (shmem_mapping(mapping))
+		res = shmem_writeout(folio, plug, folio_list);
+	else
		res = swap_writeout(folio, &wbc);
-	}
 
	if (res < 0)
		handle_write_error(mapping, folio, res);
-- 
2.47.2