From: Jens Axboe <axboe@kernel.dk>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Cc: hannes@cmpxchg.org, clm@meta.com, linux-kernel@vger.kernel.org,
willy@infradead.org, kirill@shutemov.name, bfoster@redhat.com,
Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 06/12] mm/truncate: add folio_unmap_invalidate() helper
Date: Fri, 20 Dec 2024 08:47:44 -0700
Message-ID: <20241220154831.1086649-7-axboe@kernel.dk>
In-Reply-To: <20241220154831.1086649-1-axboe@kernel.dk>
Add a folio_unmap_invalidate() helper, which unmaps and invalidates a
given folio. The caller must already have locked the folio. Embed the
old invalidate_complete_folio2() helper in there as well, as nobody else
calls it.
Use this new helper in invalidate_inode_pages2_range(), rather than
duplicating the code there.
In preparation for using this elsewhere as well, have it take a gfp_t
mask rather than assuming GFP_KERNEL is the right choice. This bubbles
back to invalidate_complete_folio2() as well.
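
As an illustration of the gfp_t argument (not part of this patch): a
hypothetical future caller in a filesystem/writeback context might pass a
more restrictive mask, along these lines, with the folio already locked;
the GFP_NOFS choice and the error handling here are made up for the
example:

	ret = folio_unmap_invalidate(mapping, folio, GFP_NOFS);
	if (ret < 0)
		return ret;	/* e.g. -EBUSY if the folio could not be invalidated */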
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
mm/internal.h | 2 ++
mm/truncate.c | 53 +++++++++++++++++++++++++++------------------------
2 files changed, 30 insertions(+), 25 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index cb8d8e8e3ffa..ed3c3690eb03 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -392,6 +392,8 @@ void unmap_page_range(struct mmu_gather *tlb,
 			     struct vm_area_struct *vma,
 			     unsigned long addr, unsigned long end,
 			     struct zap_details *details);
+int folio_unmap_invalidate(struct address_space *mapping, struct folio *folio,
+			   gfp_t gfp);
 
 void page_cache_ra_order(struct readahead_control *, struct file_ra_state *,
 		unsigned int order);
diff --git a/mm/truncate.c b/mm/truncate.c
index 7c304d2f0052..e2e115adfbc5 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -525,6 +525,15 @@ unsigned long invalidate_mapping_pages(struct address_space *mapping,
 }
 EXPORT_SYMBOL(invalidate_mapping_pages);
 
+static int folio_launder(struct address_space *mapping, struct folio *folio)
+{
+	if (!folio_test_dirty(folio))
+		return 0;
+	if (folio->mapping != mapping || mapping->a_ops->launder_folio == NULL)
+		return 0;
+	return mapping->a_ops->launder_folio(folio);
+}
+
 /*
  * This is like mapping_evict_folio(), except it ignores the folio's
  * refcount. We do this because invalidate_inode_pages2() needs stronger
@@ -532,14 +541,26 @@ EXPORT_SYMBOL(invalidate_mapping_pages);
  * shrink_folio_list() has a temp ref on them, or because they're transiently
  * sitting in the folio_add_lru() caches.
  */
-static int invalidate_complete_folio2(struct address_space *mapping,
-					struct folio *folio)
+int folio_unmap_invalidate(struct address_space *mapping, struct folio *folio,
+			   gfp_t gfp)
 {
-	if (folio->mapping != mapping)
-		return 0;
+	int ret;
+
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 
-	if (!filemap_release_folio(folio, GFP_KERNEL))
+	if (folio_test_dirty(folio))
 		return 0;
+	if (folio_mapped(folio))
+		unmap_mapping_folio(folio);
+	BUG_ON(folio_mapped(folio));
+
+	ret = folio_launder(mapping, folio);
+	if (ret)
+		return ret;
+	if (folio->mapping != mapping)
+		return -EBUSY;
+	if (!filemap_release_folio(folio, gfp))
+		return -EBUSY;
 
 	spin_lock(&mapping->host->i_lock);
 	xa_lock_irq(&mapping->i_pages);
@@ -558,16 +579,7 @@ static int invalidate_complete_folio2(struct address_space *mapping,
 failed:
 	xa_unlock_irq(&mapping->i_pages);
 	spin_unlock(&mapping->host->i_lock);
-	return 0;
-}
-
-static int folio_launder(struct address_space *mapping, struct folio *folio)
-{
-	if (!folio_test_dirty(folio))
-		return 0;
-	if (folio->mapping != mapping || mapping->a_ops->launder_folio == NULL)
-		return 0;
-	return mapping->a_ops->launder_folio(folio);
+	return -EBUSY;
 }
 
 /**
@@ -631,16 +643,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 			}
 			VM_BUG_ON_FOLIO(!folio_contains(folio, indices[i]), folio);
 			folio_wait_writeback(folio);
-
-			if (folio_mapped(folio))
-				unmap_mapping_folio(folio);
-			BUG_ON(folio_mapped(folio));
-
-			ret2 = folio_launder(mapping, folio);
-			if (ret2 == 0) {
-				if (!invalidate_complete_folio2(mapping, folio))
-					ret2 = -EBUSY;
-			}
+			ret2 = folio_unmap_invalidate(mapping, folio, GFP_KERNEL);
 			if (ret2 < 0)
 				ret = ret2;
 			folio_unlock(folio);
--
2.45.2
Thread overview: 32+ messages
2024-12-20 15:47 [PATCHSET v8 0/12] Uncached buffered IO Jens Axboe
2024-12-20 15:47 ` [PATCH 01/12] mm/filemap: change filemap_create_folio() to take a struct kiocb Jens Axboe
2024-12-20 16:11 ` Matthew Wilcox
2024-12-20 15:47 ` [PATCH 02/12] mm/filemap: use page_cache_sync_ra() to kick off read-ahead Jens Axboe
2024-12-20 16:12 ` Matthew Wilcox
2024-12-20 15:47 ` [PATCH 03/12] mm/readahead: add folio allocation helper Jens Axboe
2024-12-20 16:12 ` Matthew Wilcox
2024-12-20 15:47 ` [PATCH 04/12] mm: add PG_dropbehind folio flag Jens Axboe
2024-12-20 15:47 ` [PATCH 05/12] mm/readahead: add readahead_control->dropbehind member Jens Axboe
2024-12-20 15:47 ` Jens Axboe [this message]
2024-12-20 16:21 ` [PATCH 06/12] mm/truncate: add folio_unmap_invalidate() helper Matthew Wilcox
2024-12-20 16:28 ` Jens Axboe
2025-01-02 20:12 ` Jens Axboe
2024-12-20 15:47 ` [PATCH 07/12] fs: add RWF_DONTCACHE iocb and FOP_DONTCACHE file_operations flag Jens Axboe
2025-01-04 8:39 ` (subset) " Christian Brauner
2025-01-06 15:44 ` Jens Axboe
2024-12-20 15:47 ` [PATCH 08/12] mm/filemap: add read support for RWF_DONTCACHE Jens Axboe
2024-12-20 15:47 ` [PATCH 09/12] mm/filemap: drop streaming/uncached pages when writeback completes Jens Axboe
2025-01-18 3:29 ` Jingbo Xu
2025-03-04 3:12 ` Ritesh Harjani
2024-12-20 15:47 ` [PATCH 10/12] mm/filemap: add filemap_fdatawrite_range_kick() helper Jens Axboe
2025-01-18 3:25 ` Jingbo Xu
2024-12-20 15:47 ` [PATCH 11/12] mm: call filemap_fdatawrite_range_kick() after IOCB_DONTCACHE issue Jens Axboe
2024-12-20 15:47 ` [PATCH 12/12] mm: add FGP_DONTCACHE folio creation flag Jens Axboe
2025-01-08 3:35 ` [PATCHSET v8 0/12] Uncached buffered IO Andrew Morton
2025-01-13 15:34 ` Jens Axboe
2025-01-14 0:46 ` Andrew Morton
2025-01-14 0:56 ` Jens Axboe
2025-01-16 10:06 ` Kirill A. Shutemov