From: Christoph Hellwig
To: Andrew Morton, Hugh Dickins, Johannes Weiner, Yosry Ahmed, Nhat Pham
Cc: Matthew Wilcox, Chengming Zhou, Baolin Wang, linux-mm@kvack.org
Subject: [PATCH 2/5] mm: stop passing a writeback_control structure to shmem_writeout
Date: Fri, 16 May 2025 09:40:36 +0200
Message-ID: <20250516074146.178314-3-hch@lst.de>
In-Reply-To: <20250516074146.178314-1-hch@lst.de>
References: <20250516074146.178314-1-hch@lst.de>

shmem_writeout only needs the swap_iocb cookie and the split folio list.
Pass those explicitly and remove the now unused list member from struct
writeback_control.

Signed-off-by: Christoph Hellwig
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c |  2 +-
 drivers/gpu/drm/ttm/ttm_backup.c          |  9 +-------
 include/linux/shmem_fs.h                  |  5 ++++-
 include/linux/writeback.h                 |  3 ---
 mm/shmem.c                                | 25 ++++++++++++++---------
 mm/vmscan.c                               | 12 +++++------
 6 files changed, 26 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 19a3eb82dc6a..24d8daa4fdb3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -317,7 +317,7 @@ void __shmem_writeback(size_t size, struct address_space *mapping)
 		if (folio_mapped(folio))
 			folio_redirty_for_writepage(&wbc, folio);
 		else
-			error = shmem_writeout(folio, &wbc);
+			error = shmem_writeout(folio, NULL, NULL);
 	}
 }
 
diff --git a/drivers/gpu/drm/ttm/ttm_backup.c b/drivers/gpu/drm/ttm/ttm_backup.c
index ffaab68bd5dd..6f2e58be4f3e 100644
--- a/drivers/gpu/drm/ttm/ttm_backup.c
+++ b/drivers/gpu/drm/ttm/ttm_backup.c
@@ -112,15 +112,8 @@ ttm_backup_backup_page(struct file *backup, struct page *page,
 	if (writeback && !folio_mapped(to_folio) &&
 	    folio_clear_dirty_for_io(to_folio)) {
-		struct writeback_control wbc = {
-			.sync_mode = WB_SYNC_NONE,
-			.nr_to_write = SWAP_CLUSTER_MAX,
-			.range_start = 0,
-			.range_end = LLONG_MAX,
-			.for_reclaim = 1,
-		};
 		folio_set_reclaim(to_folio);
-		ret = shmem_writeout(to_folio, &wbc);
+		ret = shmem_writeout(to_folio, NULL, NULL);
 		if (!folio_test_writeback(to_folio))
 			folio_clear_reclaim(to_folio);
 		/*
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 5f03a39a26f7..6d0f9c599ff7 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -11,6 +11,8 @@
 #include
 #include
 
+struct swap_iocb;
+
 /* inode in-kernel data */
 
 #ifdef CONFIG_TMPFS_QUOTA
@@ -107,7 +109,8 @@ static inline bool shmem_mapping(struct address_space *mapping)
 void shmem_unlock_mapping(struct address_space *mapping);
 struct page *shmem_read_mapping_page_gfp(struct address_space *mapping,
 					 pgoff_t index, gfp_t gfp_mask);
-int shmem_writeout(struct folio *folio, struct writeback_control *wbc);
+int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
+		struct list_head *folio_list);
 void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
 int shmem_unuse(unsigned int type);
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index eda4b62511f7..82f217970092 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -79,9 +79,6 @@ struct writeback_control {
 	 */
 	struct swap_iocb **swap_plug;
 
-	/* Target list for splitting a large folio */
-	struct list_head *list;
-
 	/* internal fields used by the ->writepages implementation: */
 	struct folio_batch fbatch;
 	pgoff_t index;
diff --git a/mm/shmem.c b/mm/shmem.c
index 858cee02ca49..941b9b29e78a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1540,10 +1540,13 @@ int shmem_unuse(unsigned int type)
 /**
  * shmem_writeout - Write the folio to swap
  * @folio: The folio to write
- * @wbc: How writeback is to be done
+ * @plug: swap plug
+ * @folio_list: list to put back folios on split
  *
  * Move the folio from the page cache to the swap cache.
  */
-int shmem_writeout(struct folio *folio, struct writeback_control *wbc)
+int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
+		struct list_head *folio_list)
 {
 	struct address_space *mapping = folio->mapping;
 	struct inode *inode = mapping->host;
@@ -1553,9 +1556,6 @@ int shmem_writeout(struct folio *folio, struct writeback_control *wbc)
 	int nr_pages;
 	bool split = false;
 
-	if (WARN_ON_ONCE(!wbc->for_reclaim))
-		goto redirty;
-
 	if ((info->flags & VM_LOCKED) || sbinfo->noswap)
 		goto redirty;
 
@@ -1582,7 +1582,7 @@ int shmem_writeout(struct folio *folio, struct writeback_control *wbc)
 try_split:
 		/* Ensure the subpages are still dirty */
 		folio_test_set_dirty(folio);
-		if (split_folio_to_list(folio, wbc->list))
+		if (split_folio_to_list(folio, folio_list))
 			goto redirty;
 		folio_clear_dirty(folio);
 	}
@@ -1635,13 +1635,21 @@ int shmem_writeout(struct folio *folio, struct writeback_control *wbc)
 		list_add(&info->swaplist, &shmem_swaplist);
 
 	if (!folio_alloc_swap(folio, __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN)) {
+		struct writeback_control wbc = {
+			.sync_mode = WB_SYNC_NONE,
+			.nr_to_write = SWAP_CLUSTER_MAX,
+			.range_start = 0,
+			.range_end = LLONG_MAX,
+			.for_reclaim = 1,
+			.swap_plug = plug,
+		};
 		shmem_recalc_inode(inode, 0, nr_pages);
 		swap_shmem_alloc(folio->swap, nr_pages);
 		shmem_delete_from_page_cache(folio, swp_to_radix_entry(folio->swap));
 
 		mutex_unlock(&shmem_swaplist_mutex);
 		BUG_ON(folio_mapped(folio));
-		return swap_writeout(folio, wbc);
+		return swap_writeout(folio, &wbc);
 	}
 
 	list_del_init(&info->swaplist);
@@ -1650,10 +1658,7 @@ int shmem_writeout(struct folio *folio, struct writeback_control *wbc)
 		goto try_split;
 redirty:
 	folio_mark_dirty(folio);
-	if (wbc->for_reclaim)
-		return AOP_WRITEPAGE_ACTIVATE;	/* Return with folio locked */
-	folio_unlock(folio);
-	return 0;
+	return AOP_WRITEPAGE_ACTIVATE;	/* Return with folio locked */
 }
 EXPORT_SYMBOL_GPL(shmem_writeout);
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 52e6eee4d896..2cf954006d6d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -669,15 +669,13 @@ static pageout_t writeout(struct folio *folio, struct address_space *mapping,
 
 	/*
 	 * The large shmem folio can be split if CONFIG_THP_SWAP is not enabled
-	 * or we failed to allocate contiguous swap entries.
+	 * or we failed to allocate contiguous swap entries, in which case
+	 * the split out folios get added back to folio_list.
 	 */
-	if (shmem_mapping(mapping)) {
-		if (folio_test_large(folio))
-			wbc.list = folio_list;
-		res = shmem_writeout(folio, &wbc);
-	} else {
+	if (shmem_mapping(mapping))
+		res = shmem_writeout(folio, plug, folio_list);
+	else
 		res = swap_writeout(folio, &wbc);
-	}
 
 	if (res < 0)
 		handle_write_error(mapping, folio, res);
-- 
2.47.2