* [PATCH v5 0/5] fuse: remove temp page copies in writeback
@ 2024-11-15 22:44 Joanne Koong
From: Joanne Koong @ 2024-11-15 22:44 UTC (permalink / raw)
To: miklos, linux-fsdevel
Cc: shakeel.butt, jefflexu, josef, linux-mm, bernd.schubert, kernel-team
The purpose of this patchset is to help make writeback-cache write
performance in FUSE filesystems as fast as possible.
In the current FUSE writeback design (see commit 3be5a52b30aa
("fuse: support writable mmap"))), a temp page is allocated for every dirty
page to be written back, the contents of the dirty page are copied over to the
temp page, and the temp page gets handed to the server to write back. This is
done so that writeback may be immediately cleared on the dirty page, and this
in turn is done for two reasons:
a) in order to mitigate the following deadlock scenario that may arise if
reclaim waits on writeback on the dirty page to complete (more details can be
found in this thread [1]):
* single-threaded FUSE server is in the middle of handling a request
that needs a memory allocation
* memory allocation triggers direct reclaim
* direct reclaim waits on a folio under writeback
* the FUSE server can't write back the folio since it's stuck in
direct reclaim
b) in order to unblock internal (eg sync, page compaction) waits on writeback
without needing the server to complete writing back to disk, which may take
an indeterminate amount of time.
Allocating and copying dirty pages to temp pages is the biggest performance
bottleneck for FUSE writeback. This patchset aims to get rid of the temp page
altogether (which will also allow us to get rid of the internal FUSE rb tree
that is needed to keep track of writeback status on the temp pages).
Benchmarks show approximately a 20% improvement in throughput for 4k
block-size writes and a 45% improvement for 1M block-size writes.
With the temp page removed, writeback is now only cleared on the dirty
page after the server has written it back to disk. This may take an
indeterminate amount of time. There is also the possibility of malicious
or well-intentioned but buggy servers, where writeback may, in the worst
case, never complete. This means that any folio_wait_writeback() on a
dirty page belonging to a FUSE filesystem needs to be carefully audited.
In particular, these are the cases that need to be accounted for:
* potentially deadlocking in reclaim, as mentioned above
* potentially stalling sync(2)
* potentially stalling page migration / compaction
This patchset adds a new mapping flag, AS_WRITEBACK_INDETERMINATE, which
filesystems may set on their inode mappings to indicate that writeback
operations may take an indeterminate amount of time to complete. FUSE will set
this flag on its mappings. This patchset adds checks to the critical parts of
reclaim, sync, and page migration logic where writeback may be waited on.
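As an illustration, a guarded wait site looks roughly like this (a minimal
sketch using the helpers added in patch 1; the real hunks are in patches 2-4):

	struct address_space *mapping = folio_mapping(folio);

	/* Sketch only: skip the potentially unbounded wait when the
	 * filesystem has declared its writeback indeterminate. */
	if (folio_test_writeback(folio) &&
	    !(mapping && mapping_writeback_indeterminate(mapping)))
		folio_wait_writeback(folio);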
Please note the following:
* For sync(2), waiting on writeback will be skipped for FUSE, but this has no
effect on existing behavior. Dirty FUSE pages are already not guaranteed to
be written to disk by the time sync(2) returns (eg writeback is cleared on
the dirty page but the server may not have written out the temp page to disk
yet). If the caller wishes to ensure the data has actually been synced to
disk, they should use fsync(2)/fdatasync(2) instead.
* AS_WRITEBACK_INDETERMINATE does not indicate that the folios should never be
waited on when in writeback. There are some cases where the wait is
desirable. For example, for the sync_file_range() syscall, it is fine to
wait on the writeback since the caller passes in a fd for the operation.
[1]
https://lore.kernel.org/linux-kernel/495d2400-1d96-4924-99d3-8b2952e05fc3@linux.alibaba.com/
Changelog
---------
v4:
https://lore.kernel.org/linux-fsdevel/20241107235614.3637221-1-joannelkoong@gmail.com/
Changes from v4 -> v5:
* AS_WRITEBACK_MAY_BLOCK -> AS_WRITEBACK_INDETERMINATE (Shakeel)
* Drop memory hotplug patch (David and Shakeel)
* Remove some more unnecessary writeback waits in fuse code (Jingbo)
* Make commit message for reclaim patch more concise - drop part about deadlock and just
focus on how it may stall waits
v3:
https://lore.kernel.org/linux-fsdevel/20241107191618.2011146-1-joannelkoong@gmail.com/
Changes from v3 -> v4:
* Use filemap_fdatawait_range() instead of filemap_range_has_writeback() in
readahead
v2:
https://lore.kernel.org/linux-fsdevel/20241014182228.1941246-1-joannelkoong@gmail.com/
Changes from v2 -> v3:
* Account for sync and page migration cases as well (Miklos)
* Change AS_NO_WRITEBACK_RECLAIM to the more generic AS_WRITEBACK_MAY_BLOCK
* For fuse inodes, set mapping_writeback_may_block only if fc->writeback_cache
is enabled
v1:
https://lore.kernel.org/linux-fsdevel/20241011223434.1307300-1-joannelkoong@gmail.com/T/#t
Changes from v1 -> v2:
* Have flag in "enum mapping_flags" instead of creating asop_flags (Shakeel)
* Set fuse inodes to use AS_NO_WRITEBACK_RECLAIM (Shakeel)
Joanne Koong (5):
mm: add AS_WRITEBACK_INDETERMINATE mapping flag
mm: skip reclaiming folios in legacy memcg writeback indeterminate
contexts
fs/writeback: in wait_sb_inodes(), skip wait for
AS_WRITEBACK_INDETERMINATE mappings
mm/migrate: skip migrating folios under writeback with
AS_WRITEBACK_INDETERMINATE mappings
fuse: remove tmp folio for writebacks and internal rb tree
fs/fs-writeback.c       |   3 +
fs/fuse/file.c          | 339 +++-------------------------------------
include/linux/pagemap.h |  11 ++
mm/migrate.c            |   5 +-
mm/vmscan.c             |  10 +-
5 files changed, 45 insertions(+), 323 deletions(-)
--
2.43.5
* [PATCH v5 1/5] mm: add AS_WRITEBACK_INDETERMINATE mapping flag
From: Joanne Koong @ 2024-11-15 22:44 UTC (permalink / raw)
To: miklos, linux-fsdevel
Cc: shakeel.butt, jefflexu, josef, linux-mm, bernd.schubert, kernel-team
Add a new mapping flag AS_WRITEBACK_INDETERMINATE which filesystems may
set to indicate that writing back to disk may take an indeterminate
amount of time to complete. Extra caution should be taken when waiting
on writeback for folios belonging to mappings where this flag is set.
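For example, a filesystem that opts in would do something like the
following at inode initialization time (illustrative; patch 5 of this
series does this for fuse):

	mapping_set_writeback_indeterminate(inode->i_mapping);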
Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
---
include/linux/pagemap.h | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 68a5f1ff3301..fcf7d4dd7e2b 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -210,6 +210,7 @@ enum mapping_flags {
AS_STABLE_WRITES = 7, /* must wait for writeback before modifying
folio contents */
AS_INACCESSIBLE = 8, /* Do not attempt direct R/W access to the mapping */
+ AS_WRITEBACK_INDETERMINATE = 9, /* Use caution when waiting on writeback */
/* Bits 16-25 are used for FOLIO_ORDER */
AS_FOLIO_ORDER_BITS = 5,
AS_FOLIO_ORDER_MIN = 16,
@@ -335,6 +336,16 @@ static inline bool mapping_inaccessible(struct address_space *mapping)
return test_bit(AS_INACCESSIBLE, &mapping->flags);
}
+static inline void mapping_set_writeback_indeterminate(struct address_space *mapping)
+{
+ set_bit(AS_WRITEBACK_INDETERMINATE, &mapping->flags);
+}
+
+static inline bool mapping_writeback_indeterminate(struct address_space *mapping)
+{
+ return test_bit(AS_WRITEBACK_INDETERMINATE, &mapping->flags);
+}
+
static inline gfp_t mapping_gfp_mask(struct address_space * mapping)
{
return mapping->gfp_mask;
--
2.43.5
* [PATCH v5 2/5] mm: skip reclaiming folios in legacy memcg writeback indeterminate contexts
From: Joanne Koong @ 2024-11-15 22:44 UTC (permalink / raw)
To: miklos, linux-fsdevel
Cc: shakeel.butt, jefflexu, josef, linux-mm, bernd.schubert, kernel-team
Currently in shrink_folio_list(), reclaim for folios under writeback
falls into 3 different cases:
1) Reclaim is encountering an excessive number of folios under
writeback and this folio has both the writeback and reclaim flags
set
2) Dirty throttling is enabled (this happens if reclaim through cgroup
is not enabled, if reclaim through cgroupv2 memcg is enabled, or
if reclaim is on the root cgroup), or if the folio is not marked for
immediate reclaim, or if the caller does not have __GFP_FS (or
__GFP_IO if it's going to swap) set
3) Legacy cgroupv1 encounters a folio that already has the reclaim flag
set and the caller did not have __GFP_FS (or __GFP_IO if swap) set
In cases 1) and 2), we activate the folio and skip reclaiming it, while in
case 3) we wait for writeback to finish on the folio and then try to
reclaim the folio again. We wait on writeback in case 3 because cgroupv1
does not have dirty folio throttling; as such, this is a mitigation for
the case where there are too many folios under writeback and nothing else
to reclaim.
For filesystems where writeback may take an indeterminate amount of time
to write to disk, this risks stalling reclaim.
In this commit, if legacy memcg encounters a folio with the reclaim flag
set (i.e. case 3) and the folio belongs to a mapping that has the
AS_WRITEBACK_INDETERMINATE flag set, the folio will be activated and will
skip reclaim (i.e. default to the behavior in case 2) instead.
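In pseudocode, the handling of a folio under writeback after this patch
becomes roughly the following (an illustrative summary of the hunks below,
not literal kernel code):

	if (kswapd && too many folios under writeback &&
	    folio_test_reclaim(folio))
		activate folio and skip reclaim;	/* case 1 */
	else if (writeback_throttling_sane(sc) ||
		 !folio_test_reclaim(folio) ||
		 !may_enter_fs(folio, sc->gfp_mask) ||
		 (mapping && mapping_writeback_indeterminate(mapping)))
		activate folio and skip reclaim;	/* case 2 */
	else
		wait for writeback, then retry;		/* case 3 */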
Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
---
mm/vmscan.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 749cdc110c74..37ce6b6dac06 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1129,8 +1129,9 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
* 2) Global or new memcg reclaim encounters a folio that is
* not marked for immediate reclaim, or the caller does not
* have __GFP_FS (or __GFP_IO if it's simply going to swap,
- * not to fs). In this case mark the folio for immediate
- * reclaim and continue scanning.
+ * not to fs), or the writeback may take an indeterminate
+ * amount of time to complete. In this case mark the folio
+ * for immediate reclaim and continue scanning.
*
* Require may_enter_fs() because we would wait on fs, which
* may not have submitted I/O yet. And the loop driver might
@@ -1155,6 +1156,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
* takes to write them to disk.
*/
if (folio_test_writeback(folio)) {
+ mapping = folio_mapping(folio);
+
/* Case 1 above */
if (current_is_kswapd() &&
folio_test_reclaim(folio) &&
@@ -1165,7 +1168,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
/* Case 2 above */
} else if (writeback_throttling_sane(sc) ||
!folio_test_reclaim(folio) ||
- !may_enter_fs(folio, sc->gfp_mask)) {
+ !may_enter_fs(folio, sc->gfp_mask) ||
+ (mapping && mapping_writeback_indeterminate(mapping))) {
/*
* This is slightly racy -
* folio_end_writeback() might have
--
2.43.5
* [PATCH v5 3/5] fs/writeback: in wait_sb_inodes(), skip wait for AS_WRITEBACK_INDETERMINATE mappings
From: Joanne Koong @ 2024-11-15 22:44 UTC (permalink / raw)
To: miklos, linux-fsdevel
Cc: shakeel.butt, jefflexu, josef, linux-mm, bernd.schubert, kernel-team
For filesystems with the AS_WRITEBACK_INDETERMINATE flag set, writeback
operations may take an indeterminate time to complete. For example, writing
data back to disk in FUSE filesystems depends on the userspace server
successfully completing writeback.
In this commit, wait_sb_inodes() skips waiting on writeback if the
inode's mapping has AS_WRITEBACK_INDETERMINATE set; otherwise, sync(2) may
take an indeterminate amount of time to complete.
If the caller wishes to ensure the data for a mapping with the
AS_WRITEBACK_INDETERMINATE flag set has actually been written back to disk,
they should use fsync(2)/fdatasync(2) instead.
Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
---
fs/fs-writeback.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index d8bec3c1bb1f..ad192db17ce4 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -2659,6 +2659,9 @@ static void wait_sb_inodes(struct super_block *sb)
if (!mapping_tagged(mapping, PAGECACHE_TAG_WRITEBACK))
continue;
+ if (mapping_writeback_indeterminate(mapping))
+ continue;
+
spin_unlock_irq(&sb->s_inode_wblist_lock);
spin_lock(&inode->i_lock);
--
2.43.5
* [PATCH v5 4/5] mm/migrate: skip migrating folios under writeback with AS_WRITEBACK_INDETERMINATE mappings
From: Joanne Koong @ 2024-11-15 22:44 UTC (permalink / raw)
To: miklos, linux-fsdevel
Cc: shakeel.butt, jefflexu, josef, linux-mm, bernd.schubert, kernel-team
For migrations called in MIGRATE_SYNC mode, skip migrating the folio if
it is under writeback and has the AS_WRITEBACK_INDETERMINATE flag set on its
mapping. When that flag is set, the writeback may take an indeterminate
amount of time to complete, and waiting on it may stall migration
indefinitely.
Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
---
mm/migrate.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index df91248755e4..fe73284e5246 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1260,7 +1260,10 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
*/
switch (mode) {
case MIGRATE_SYNC:
- break;
+ if (!src->mapping ||
+ !mapping_writeback_indeterminate(src->mapping))
+ break;
+ fallthrough;
default:
rc = -EBUSY;
goto out;
--
2.43.5
* [PATCH v5 5/5] fuse: remove tmp folio for writebacks and internal rb tree
From: Joanne Koong @ 2024-11-15 22:44 UTC (permalink / raw)
To: miklos, linux-fsdevel
Cc: shakeel.butt, jefflexu, josef, linux-mm, bernd.schubert, kernel-team
In the current FUSE writeback design (see commit 3be5a52b30aa
("fuse: support writable mmap")), a temp page is allocated for every
dirty page to be written back, the contents of the dirty page are copied over
to the temp page, and the temp page gets handed to the server to write back.
This is done so that writeback may be immediately cleared on the dirty page,
and this in turn is done for two reasons:
a) in order to mitigate the following deadlock scenario that may arise
if reclaim waits on writeback on the dirty page to complete:
* single-threaded FUSE server is in the middle of handling a request
that needs a memory allocation
* memory allocation triggers direct reclaim
* direct reclaim waits on a folio under writeback
* the FUSE server can't write back the folio since it's stuck in
direct reclaim
b) in order to unblock internal (eg sync, page compaction) waits on
writeback without needing the server to complete writing back to disk,
which may take an indeterminate amount of time.
With a recent change that added AS_WRITEBACK_INDETERMINATE and mitigates
the situations described above, FUSE writeback does not need to use
temp pages if it sets AS_WRITEBACK_INDETERMINATE on its inode mappings.
This commit sets AS_WRITEBACK_INDETERMINATE on the inode mappings
and removes the temporary pages + extra copying and the internal rb
tree.
fio benchmarks --
(using averages observed from 10 runs, throwing away outliers)
Setup:
sudo mount -t tmpfs -o size=30G tmpfs ~/tmp_mount
./libfuse/build/example/passthrough_ll -o writeback -o max_threads=4 -o source=~/tmp_mount ~/fuse_mount
fio --name=writeback --ioengine=sync --rw=write --bs={1k,4k,1M} --size=2G
--numjobs=2 --ramp_time=30 --group_reporting=1 --directory=/root/fuse_mount
bs =         1k          4k          1M
Before       351 MiB/s   1818 MiB/s  1851 MiB/s
After        341 MiB/s   2246 MiB/s  2685 MiB/s
% diff       -3%         23%         45%
Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
---
fs/fuse/file.c | 339 +++----------------------------------------------
1 file changed, 20 insertions(+), 319 deletions(-)
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 88d0946b5bc9..56289ac58596 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -415,89 +415,11 @@ u64 fuse_lock_owner_id(struct fuse_conn *fc, fl_owner_t id)
struct fuse_writepage_args {
struct fuse_io_args ia;
- struct rb_node writepages_entry;
struct list_head queue_entry;
- struct fuse_writepage_args *next;
struct inode *inode;
struct fuse_sync_bucket *bucket;
};
-static struct fuse_writepage_args *fuse_find_writeback(struct fuse_inode *fi,
- pgoff_t idx_from, pgoff_t idx_to)
-{
- struct rb_node *n;
-
- n = fi->writepages.rb_node;
-
- while (n) {
- struct fuse_writepage_args *wpa;
- pgoff_t curr_index;
-
- wpa = rb_entry(n, struct fuse_writepage_args, writepages_entry);
- WARN_ON(get_fuse_inode(wpa->inode) != fi);
- curr_index = wpa->ia.write.in.offset >> PAGE_SHIFT;
- if (idx_from >= curr_index + wpa->ia.ap.num_folios)
- n = n->rb_right;
- else if (idx_to < curr_index)
- n = n->rb_left;
- else
- return wpa;
- }
- return NULL;
-}
-
-/*
- * Check if any page in a range is under writeback
- */
-static bool fuse_range_is_writeback(struct inode *inode, pgoff_t idx_from,
- pgoff_t idx_to)
-{
- struct fuse_inode *fi = get_fuse_inode(inode);
- bool found;
-
- if (RB_EMPTY_ROOT(&fi->writepages))
- return false;
-
- spin_lock(&fi->lock);
- found = fuse_find_writeback(fi, idx_from, idx_to);
- spin_unlock(&fi->lock);
-
- return found;
-}
-
-static inline bool fuse_page_is_writeback(struct inode *inode, pgoff_t index)
-{
- return fuse_range_is_writeback(inode, index, index);
-}
-
-/*
- * Wait for page writeback to be completed.
- *
- * Since fuse doesn't rely on the VM writeback tracking, this has to
- * use some other means.
- */
-static void fuse_wait_on_page_writeback(struct inode *inode, pgoff_t index)
-{
- struct fuse_inode *fi = get_fuse_inode(inode);
-
- wait_event(fi->page_waitq, !fuse_page_is_writeback(inode, index));
-}
-
-static inline bool fuse_folio_is_writeback(struct inode *inode,
- struct folio *folio)
-{
- pgoff_t last = folio_next_index(folio) - 1;
- return fuse_range_is_writeback(inode, folio_index(folio), last);
-}
-
-static void fuse_wait_on_folio_writeback(struct inode *inode,
- struct folio *folio)
-{
- struct fuse_inode *fi = get_fuse_inode(inode);
-
- wait_event(fi->page_waitq, !fuse_folio_is_writeback(inode, folio));
-}
-
/*
* Wait for all pending writepages on the inode to finish.
*
@@ -886,13 +808,6 @@ static int fuse_do_readfolio(struct file *file, struct folio *folio)
ssize_t res;
u64 attr_ver;
- /*
- * With the temporary pages that are used to complete writeback, we can
- * have writeback that extends beyond the lifetime of the folio. So
- * make sure we read a properly synced folio.
- */
- fuse_wait_on_folio_writeback(inode, folio);
-
attr_ver = fuse_get_attr_version(fm->fc);
/* Don't overflow end offset */
@@ -1003,17 +918,12 @@ static void fuse_send_readpages(struct fuse_io_args *ia, struct file *file)
static void fuse_readahead(struct readahead_control *rac)
{
struct inode *inode = rac->mapping->host;
- struct fuse_inode *fi = get_fuse_inode(inode);
struct fuse_conn *fc = get_fuse_conn(inode);
unsigned int max_pages, nr_pages;
- pgoff_t first = readahead_index(rac);
- pgoff_t last = first + readahead_count(rac) - 1;
if (fuse_is_bad(inode))
return;
- wait_event(fi->page_waitq, !fuse_range_is_writeback(inode, first, last));
-
max_pages = min_t(unsigned int, fc->max_pages,
fc->max_read / PAGE_SIZE);
@@ -1172,7 +1082,7 @@ static ssize_t fuse_send_write_pages(struct fuse_io_args *ia,
int err;
for (i = 0; i < ap->num_folios; i++)
- fuse_wait_on_folio_writeback(inode, ap->folios[i]);
+ folio_wait_writeback(ap->folios[i]);
fuse_write_args_fill(ia, ff, pos, count);
ia->write.in.flags = fuse_write_flags(iocb);
@@ -1622,7 +1532,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
return res;
}
}
- if (!cuse && fuse_range_is_writeback(inode, idx_from, idx_to)) {
+ if (!cuse && filemap_range_has_writeback(mapping, pos, (pos + count - 1))) {
if (!write)
inode_lock(inode);
fuse_sync_writes(inode);
@@ -1825,7 +1735,7 @@ static void fuse_writepage_free(struct fuse_writepage_args *wpa)
fuse_sync_bucket_dec(wpa->bucket);
for (i = 0; i < ap->num_folios; i++)
- folio_put(ap->folios[i]);
+ folio_end_writeback(ap->folios[i]);
fuse_file_put(wpa->ia.ff, false);
@@ -1838,7 +1748,7 @@ static void fuse_writepage_finish_stat(struct inode *inode, struct folio *folio)
struct backing_dev_info *bdi = inode_to_bdi(inode);
dec_wb_stat(&bdi->wb, WB_WRITEBACK);
- node_stat_sub_folio(folio, NR_WRITEBACK_TEMP);
+ node_stat_sub_folio(folio, NR_WRITEBACK);
wb_writeout_inc(&bdi->wb);
}
@@ -1861,7 +1771,6 @@ static void fuse_send_writepage(struct fuse_mount *fm,
__releases(fi->lock)
__acquires(fi->lock)
{
- struct fuse_writepage_args *aux, *next;
struct fuse_inode *fi = get_fuse_inode(wpa->inode);
struct fuse_write_in *inarg = &wpa->ia.write.in;
struct fuse_args *args = &wpa->ia.ap.args;
@@ -1898,19 +1807,8 @@ __acquires(fi->lock)
out_free:
fi->writectr--;
- rb_erase(&wpa->writepages_entry, &fi->writepages);
fuse_writepage_finish(wpa);
spin_unlock(&fi->lock);
-
- /* After rb_erase() aux request list is private */
- for (aux = wpa->next; aux; aux = next) {
- next = aux->next;
- aux->next = NULL;
- fuse_writepage_finish_stat(aux->inode,
- aux->ia.ap.folios[0]);
- fuse_writepage_free(aux);
- }
-
fuse_writepage_free(wpa);
spin_lock(&fi->lock);
}
@@ -1938,43 +1836,6 @@ __acquires(fi->lock)
}
}
-static struct fuse_writepage_args *fuse_insert_writeback(struct rb_root *root,
- struct fuse_writepage_args *wpa)
-{
- pgoff_t idx_from = wpa->ia.write.in.offset >> PAGE_SHIFT;
- pgoff_t idx_to = idx_from + wpa->ia.ap.num_folios - 1;
- struct rb_node **p = &root->rb_node;
- struct rb_node *parent = NULL;
-
- WARN_ON(!wpa->ia.ap.num_folios);
- while (*p) {
- struct fuse_writepage_args *curr;
- pgoff_t curr_index;
-
- parent = *p;
- curr = rb_entry(parent, struct fuse_writepage_args,
- writepages_entry);
- WARN_ON(curr->inode != wpa->inode);
- curr_index = curr->ia.write.in.offset >> PAGE_SHIFT;
-
- if (idx_from >= curr_index + curr->ia.ap.num_folios)
- p = &(*p)->rb_right;
- else if (idx_to < curr_index)
- p = &(*p)->rb_left;
- else
- return curr;
- }
-
- rb_link_node(&wpa->writepages_entry, parent, p);
- rb_insert_color(&wpa->writepages_entry, root);
- return NULL;
-}
-
-static void tree_insert(struct rb_root *root, struct fuse_writepage_args *wpa)
-{
- WARN_ON(fuse_insert_writeback(root, wpa));
-}
-
static void fuse_writepage_end(struct fuse_mount *fm, struct fuse_args *args,
int error)
{
@@ -1994,41 +1855,6 @@ static void fuse_writepage_end(struct fuse_mount *fm, struct fuse_args *args,
if (!fc->writeback_cache)
fuse_invalidate_attr_mask(inode, FUSE_STATX_MODIFY);
spin_lock(&fi->lock);
- rb_erase(&wpa->writepages_entry, &fi->writepages);
- while (wpa->next) {
- struct fuse_mount *fm = get_fuse_mount(inode);
- struct fuse_write_in *inarg = &wpa->ia.write.in;
- struct fuse_writepage_args *next = wpa->next;
-
- wpa->next = next->next;
- next->next = NULL;
- tree_insert(&fi->writepages, next);
-
- /*
- * Skip fuse_flush_writepages() to make it easy to crop requests
- * based on primary request size.
- *
- * 1st case (trivial): there are no concurrent activities using
- * fuse_set/release_nowrite. Then we're on safe side because
- * fuse_flush_writepages() would call fuse_send_writepage()
- * anyway.
- *
- * 2nd case: someone called fuse_set_nowrite and it is waiting
- * now for completion of all in-flight requests. This happens
- * rarely and no more than once per page, so this should be
- * okay.
- *
- * 3rd case: someone (e.g. fuse_do_setattr()) is in the middle
- * of fuse_set_nowrite..fuse_release_nowrite section. The fact
- * that fuse_set_nowrite returned implies that all in-flight
- * requests were completed along with all of their secondary
- * requests. Further primary requests are blocked by negative
- * writectr. Hence there cannot be any in-flight requests and
- * no invocations of fuse_writepage_end() while we're in
- * fuse_set_nowrite..fuse_release_nowrite section.
- */
- fuse_send_writepage(fm, next, inarg->offset + inarg->size);
- }
fi->writectr--;
fuse_writepage_finish(wpa);
spin_unlock(&fi->lock);
@@ -2115,19 +1941,17 @@ static void fuse_writepage_add_to_bucket(struct fuse_conn *fc,
}
static void fuse_writepage_args_page_fill(struct fuse_writepage_args *wpa, struct folio *folio,
- struct folio *tmp_folio, uint32_t folio_index)
+ uint32_t folio_index)
{
struct inode *inode = folio->mapping->host;
struct fuse_args_pages *ap = &wpa->ia.ap;
- folio_copy(tmp_folio, folio);
-
- ap->folios[folio_index] = tmp_folio;
+ ap->folios[folio_index] = folio;
ap->descs[folio_index].offset = 0;
ap->descs[folio_index].length = PAGE_SIZE;
inc_wb_stat(&inode_to_bdi(inode)->wb, WB_WRITEBACK);
- node_stat_add_folio(tmp_folio, NR_WRITEBACK_TEMP);
+ node_stat_add_folio(folio, NR_WRITEBACK);
}
static struct fuse_writepage_args *fuse_writepage_args_setup(struct folio *folio,
@@ -2162,18 +1986,12 @@ static int fuse_writepage_locked(struct folio *folio)
struct fuse_inode *fi = get_fuse_inode(inode);
struct fuse_writepage_args *wpa;
struct fuse_args_pages *ap;
- struct folio *tmp_folio;
struct fuse_file *ff;
- int error = -ENOMEM;
-
- tmp_folio = folio_alloc(GFP_NOFS | __GFP_HIGHMEM, 0);
- if (!tmp_folio)
- goto err;
+ int error = -EIO;
- error = -EIO;
ff = fuse_write_file_get(fi);
if (!ff)
- goto err_nofile;
+ goto err;
wpa = fuse_writepage_args_setup(folio, ff);
error = -ENOMEM;
@@ -2184,22 +2002,17 @@ static int fuse_writepage_locked(struct folio *folio)
ap->num_folios = 1;
folio_start_writeback(folio);
- fuse_writepage_args_page_fill(wpa, folio, tmp_folio, 0);
+ fuse_writepage_args_page_fill(wpa, folio, 0);
spin_lock(&fi->lock);
- tree_insert(&fi->writepages, wpa);
list_add_tail(&wpa->queue_entry, &fi->queued_writes);
fuse_flush_writepages(inode);
spin_unlock(&fi->lock);
- folio_end_writeback(folio);
-
return 0;
err_writepage_args:
fuse_file_put(ff, false);
-err_nofile:
- folio_put(tmp_folio);
err:
mapping_set_error(folio->mapping, error);
return error;
@@ -2209,7 +2022,6 @@ struct fuse_fill_wb_data {
struct fuse_writepage_args *wpa;
struct fuse_file *ff;
struct inode *inode;
- struct folio **orig_folios;
unsigned int max_folios;
};
@@ -2244,69 +2056,11 @@ static void fuse_writepages_send(struct fuse_fill_wb_data *data)
struct fuse_writepage_args *wpa = data->wpa;
struct inode *inode = data->inode;
struct fuse_inode *fi = get_fuse_inode(inode);
- int num_folios = wpa->ia.ap.num_folios;
- int i;
spin_lock(&fi->lock);
list_add_tail(&wpa->queue_entry, &fi->queued_writes);
fuse_flush_writepages(inode);
spin_unlock(&fi->lock);
-
- for (i = 0; i < num_folios; i++)
- folio_end_writeback(data->orig_folios[i]);
-}
-
-/*
- * Check under fi->lock if the page is under writeback, and insert it onto the
- * rb_tree if not. Otherwise iterate auxiliary write requests, to see if there's
- * one already added for a page at this offset. If there's none, then insert
- * this new request onto the auxiliary list, otherwise reuse the existing one by
- * swapping the new temp page with the old one.
- */
-static bool fuse_writepage_add(struct fuse_writepage_args *new_wpa,
- struct folio *folio)
-{
- struct fuse_inode *fi = get_fuse_inode(new_wpa->inode);
- struct fuse_writepage_args *tmp;
- struct fuse_writepage_args *old_wpa;
- struct fuse_args_pages *new_ap = &new_wpa->ia.ap;
-
- WARN_ON(new_ap->num_folios != 0);
- new_ap->num_folios = 1;
-
- spin_lock(&fi->lock);
- old_wpa = fuse_insert_writeback(&fi->writepages, new_wpa);
- if (!old_wpa) {
- spin_unlock(&fi->lock);
- return true;
- }
-
- for (tmp = old_wpa->next; tmp; tmp = tmp->next) {
- pgoff_t curr_index;
-
- WARN_ON(tmp->inode != new_wpa->inode);
- curr_index = tmp->ia.write.in.offset >> PAGE_SHIFT;
- if (curr_index == folio->index) {
- WARN_ON(tmp->ia.ap.num_folios != 1);
- swap(tmp->ia.ap.folios[0], new_ap->folios[0]);
- break;
- }
- }
-
- if (!tmp) {
- new_wpa->next = old_wpa->next;
- old_wpa->next = new_wpa;
- }
-
- spin_unlock(&fi->lock);
-
- if (tmp) {
- fuse_writepage_finish_stat(new_wpa->inode,
- folio);
- fuse_writepage_free(new_wpa);
- }
-
- return false;
}
static bool fuse_writepage_need_send(struct fuse_conn *fc, struct folio *folio,
@@ -2315,15 +2069,6 @@ static bool fuse_writepage_need_send(struct fuse_conn *fc, struct folio *folio,
{
WARN_ON(!ap->num_folios);
- /*
- * Being under writeback is unlikely but possible. For example direct
- * read to an mmaped fuse file will set the page dirty twice; once when
- * the pages are faulted with get_user_pages(), and then after the read
- * completed.
- */
- if (fuse_folio_is_writeback(data->inode, folio))
- return true;
-
/* Reached max pages */
if (ap->num_folios == fc->max_pages)
return true;
@@ -2333,7 +2078,7 @@ static bool fuse_writepage_need_send(struct fuse_conn *fc, struct folio *folio,
return true;
/* Discontinuity */
- if (data->orig_folios[ap->num_folios - 1]->index + 1 != folio_index(folio))
+ if (ap->folios[ap->num_folios - 1]->index + 1 != folio_index(folio))
return true;
/* Need to grow the pages array? If so, did the expansion fail? */
@@ -2352,7 +2097,6 @@ static int fuse_writepages_fill(struct folio *folio,
struct inode *inode = data->inode;
struct fuse_inode *fi = get_fuse_inode(inode);
struct fuse_conn *fc = get_fuse_conn(inode);
- struct folio *tmp_folio;
int err;
if (!data->ff) {
@@ -2367,54 +2111,23 @@ static int fuse_writepages_fill(struct folio *folio,
data->wpa = NULL;
}
- err = -ENOMEM;
- tmp_folio = folio_alloc(GFP_NOFS | __GFP_HIGHMEM, 0);
- if (!tmp_folio)
- goto out_unlock;
-
- /*
- * The page must not be redirtied until the writeout is completed
- * (i.e. userspace has sent a reply to the write request). Otherwise
- * there could be more than one temporary page instance for each real
- * page.
- *
- * This is ensured by holding the page lock in page_mkwrite() while
- * checking fuse_page_is_writeback(). We already hold the page lock
- * since clear_page_dirty_for_io() and keep it held until we add the
- * request to the fi->writepages list and increment ap->num_folios.
- * After this fuse_page_is_writeback() will indicate that the page is
- * under writeback, so we can release the page lock.
- */
if (data->wpa == NULL) {
err = -ENOMEM;
wpa = fuse_writepage_args_setup(folio, data->ff);
- if (!wpa) {
- folio_put(tmp_folio);
+ if (!wpa)
goto out_unlock;
- }
fuse_file_get(wpa->ia.ff);
data->max_folios = 1;
ap = &wpa->ia.ap;
}
folio_start_writeback(folio);
- fuse_writepage_args_page_fill(wpa, folio, tmp_folio, ap->num_folios);
- data->orig_folios[ap->num_folios] = folio;
+ fuse_writepage_args_page_fill(wpa, folio, ap->num_folios);
err = 0;
- if (data->wpa) {
- /*
- * Protected by fi->lock against concurrent access by
- * fuse_page_is_writeback().
- */
- spin_lock(&fi->lock);
- ap->num_folios++;
- spin_unlock(&fi->lock);
- } else if (fuse_writepage_add(wpa, folio)) {
+ ap->num_folios++;
+ if (!data->wpa)
data->wpa = wpa;
- } else {
- folio_end_writeback(folio);
- }
out_unlock:
folio_unlock(folio);
@@ -2441,13 +2154,6 @@ static int fuse_writepages(struct address_space *mapping,
data.wpa = NULL;
data.ff = NULL;
- err = -ENOMEM;
- data.orig_folios = kcalloc(fc->max_pages,
- sizeof(struct folio *),
- GFP_NOFS);
- if (!data.orig_folios)
- goto out;
-
err = write_cache_pages(mapping, wbc, fuse_writepages_fill, &data);
if (data.wpa) {
WARN_ON(!data.wpa->ia.ap.num_folios);
@@ -2456,7 +2162,6 @@ static int fuse_writepages(struct address_space *mapping,
if (data.ff)
fuse_file_put(data.ff, false);
- kfree(data.orig_folios);
out:
return err;
}
@@ -2481,8 +2186,6 @@ static int fuse_write_begin(struct file *file, struct address_space *mapping,
if (IS_ERR(folio))
goto error;
- fuse_wait_on_page_writeback(mapping->host, folio->index);
-
if (folio_test_uptodate(folio) || len >= folio_size(folio))
goto success;
/*
@@ -2545,13 +2248,9 @@ static int fuse_launder_folio(struct folio *folio)
{
int err = 0;
if (folio_clear_dirty_for_io(folio)) {
- struct inode *inode = folio->mapping->host;
-
- /* Serialize with pending writeback for the same page */
- fuse_wait_on_page_writeback(inode, folio->index);
err = fuse_writepage_locked(folio);
if (!err)
- fuse_wait_on_page_writeback(inode, folio->index);
+ folio_wait_writeback(folio);
}
return err;
}
@@ -2595,7 +2294,7 @@ static vm_fault_t fuse_page_mkwrite(struct vm_fault *vmf)
return VM_FAULT_NOPAGE;
}
- fuse_wait_on_folio_writeback(inode, folio);
+ folio_wait_writeback(folio);
return VM_FAULT_LOCKED;
}
@@ -3413,9 +3112,12 @@ static const struct address_space_operations fuse_file_aops = {
void fuse_init_file_inode(struct inode *inode, unsigned int flags)
{
struct fuse_inode *fi = get_fuse_inode(inode);
+ struct fuse_conn *fc = get_fuse_conn(inode);
inode->i_fop = &fuse_file_operations;
inode->i_data.a_ops = &fuse_file_aops;
+ if (fc->writeback_cache)
+ mapping_set_writeback_indeterminate(&inode->i_data);
INIT_LIST_HEAD(&fi->write_files);
INIT_LIST_HEAD(&fi->queued_writes);
@@ -3423,7 +3125,6 @@ void fuse_init_file_inode(struct inode *inode, unsigned int flags)
fi->iocachectr = 0;
init_waitqueue_head(&fi->page_waitq);
init_waitqueue_head(&fi->direct_io_waitq);
- fi->writepages = RB_ROOT;
if (IS_ENABLED(CONFIG_FUSE_DAX))
fuse_dax_inode_init(inode, flags);
--
2.43.5
* Re: [PATCH v5 1/5] mm: add AS_WRITEBACK_INDETERMINATE mapping flag
From: Shakeel Butt @ 2024-11-15 23:11 UTC (permalink / raw)
To: Joanne Koong
Cc: miklos, linux-fsdevel, jefflexu, josef, linux-mm, bernd.schubert,
kernel-team
On Fri, Nov 15, 2024 at 02:44:55PM -0800, Joanne Koong wrote:
> Add a new mapping flag AS_WRITEBACK_INDETERMINATE which filesystems may
> set to indicate that writing back to disk may take an indeterminate
> amount of time to complete. Extra caution should be taken when waiting
> on writeback for folios belonging to mappings where this flag is set.
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
Indeterminate is definitely different, ok with me.
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
* Re: [PATCH v5 4/5] mm/migrate: skip migrating folios under writeback with AS_WRITEBACK_INDETERMINATE mappings
From: Shakeel Butt @ 2024-11-15 23:12 UTC (permalink / raw)
To: Joanne Koong
Cc: miklos, linux-fsdevel, jefflexu, josef, linux-mm, bernd.schubert,
kernel-team
On Fri, Nov 15, 2024 at 02:44:58PM -0800, Joanne Koong wrote:
> For migrations called in MIGRATE_SYNC mode, skip migrating the folio if
> it is under writeback and has the AS_WRITEBACK_INDETERMINATE flag set on its
> mapping. If the AS_WRITEBACK_INDETERMINATE flag is set on the mapping, the
> writeback may take an indeterminate amount of time to complete, and
> waits may get stuck.
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
* Re: [PATCH v5 5/5] fuse: remove tmp folio for writebacks and internal rb tree
From: Jingbo Xu @ 2024-11-19 7:59 UTC (permalink / raw)
To: Joanne Koong, miklos, linux-fsdevel
Cc: shakeel.butt, josef, linux-mm, bernd.schubert, kernel-team
On 11/16/24 6:44 AM, Joanne Koong wrote:
> @@ -1838,7 +1748,7 @@ static void fuse_writepage_finish_stat(struct inode *inode, struct folio *folio)
> struct backing_dev_info *bdi = inode_to_bdi(inode);
>
> dec_wb_stat(&bdi->wb, WB_WRITEBACK);
> - node_stat_sub_folio(folio, NR_WRITEBACK_TEMP);
> + node_stat_sub_folio(folio, NR_WRITEBACK);
Now that fuse_writepage_finish_stat() has only one caller, we could inline
it into that caller.
> static void fuse_writepage_args_page_fill(struct fuse_writepage_args *wpa, struct folio *folio,
> - struct folio *tmp_folio, uint32_t folio_index)
> + uint32_t folio_index)
> {
> struct inode *inode = folio->mapping->host;
> struct fuse_args_pages *ap = &wpa->ia.ap;
>
> - folio_copy(tmp_folio, folio);
> -
> - ap->folios[folio_index] = tmp_folio;
> + ap->folios[folio_index] = folio;
> ap->descs[folio_index].offset = 0;
> ap->descs[folio_index].length = PAGE_SIZE;
>
> inc_wb_stat(&inode_to_bdi(inode)->wb, WB_WRITEBACK);
> - node_stat_add_folio(tmp_folio, NR_WRITEBACK_TEMP);
> + node_stat_add_folio(folio, NR_WRITEBACK);
Incrementing the NR_WRITEBACK counter here, along with the corresponding
decrement in fuse_writepage_finish_stat(), seems unnecessary:
folio_start_writeback() already increments NR_WRITEBACK, and
folio_end_writeback() decrements it.
--
Thanks,
Jingbo
* Re: [PATCH v5 5/5] fuse: remove tmp folio for writebacks and internal rb tree
From: Jingbo Xu @ 2024-11-20 9:56 UTC (permalink / raw)
To: Joanne Koong, miklos, linux-fsdevel
Cc: shakeel.butt, josef, linux-mm, bernd.schubert, kernel-team
On 11/16/24 6:44 AM, Joanne Koong wrote:
> In the current FUSE writeback design (see commit 3be5a52b30aa
> ("fuse: support writable mmap")), a temp page is allocated for every
> dirty page to be written back, the contents of the dirty page are copied over
> to the temp page, and the temp page gets handed to the server to write back.
>
> This is done so that writeback may be immediately cleared on the dirty page,
> and this in turn is done for two reasons:
> a) in order to mitigate the following deadlock scenario that may arise
> if reclaim waits on writeback on the dirty page to complete:
> * single-threaded FUSE server is in the middle of handling a request
> that needs a memory allocation
> * memory allocation triggers direct reclaim
> * direct reclaim waits on a folio under writeback
> * the FUSE server can't write back the folio since it's stuck in
> direct reclaim
> b) in order to unblock internal (eg sync, page compaction) waits on
> writeback without needing the server to complete writing back to disk,
> which may take an indeterminate amount of time.
>
> With a recent change that added AS_WRITEBACK_INDETERMINATE and mitigates
> the situations described above, FUSE writeback does not need to use
> temp pages if it sets AS_WRITEBACK_INDETERMINATE on its inode mappings.
>
> This commit sets AS_WRITEBACK_INDETERMINATE on the inode mappings
> and removes the temporary pages + extra copying and the internal rb
> tree.
>
> fio benchmarks --
> (using averages observed from 10 runs, throwing away outliers)
>
> Setup:
> sudo mount -t tmpfs -o size=30G tmpfs ~/tmp_mount
> ./libfuse/build/example/passthrough_ll -o writeback -o max_threads=4 -o source=~/tmp_mount ~/fuse_mount
>
> fio --name=writeback --ioengine=sync --rw=write --bs={1k,4k,1M} --size=2G
> --numjobs=2 --ramp_time=30 --group_reporting=1 --directory=/root/fuse_mount
>
> bs = 1k 4k 1M
> Before 351 MiB/s 1818 MiB/s 1851 MiB/s
> After 341 MiB/s 2246 MiB/s 2685 MiB/s
> % diff -3% 23% 45%
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> ---
> fs/fuse/file.c | 339 +++----------------------------------------------
> 1 file changed, 20 insertions(+), 319 deletions(-)
>
> diff --git a/fs/fuse/file.c b/fs/fuse/file.c
> index 88d0946b5bc9..56289ac58596 100644
> --- a/fs/fuse/file.c
> +++ b/fs/fuse/file.c
> @@ -415,89 +415,11 @@ u64 fuse_lock_owner_id(struct fuse_conn *fc, fl_owner_t id)
>
> struct fuse_writepage_args {
> struct fuse_io_args ia;
> - struct rb_node writepages_entry;
> struct list_head queue_entry;
> - struct fuse_writepage_args *next;
> struct inode *inode;
> struct fuse_sync_bucket *bucket;
> };
>
> -static struct fuse_writepage_args *fuse_find_writeback(struct fuse_inode *fi,
> - pgoff_t idx_from, pgoff_t idx_to)
> -{
> - struct rb_node *n;
> -
> - n = fi->writepages.rb_node;
> -
> - while (n) {
> - struct fuse_writepage_args *wpa;
> - pgoff_t curr_index;
> -
> - wpa = rb_entry(n, struct fuse_writepage_args, writepages_entry);
> - WARN_ON(get_fuse_inode(wpa->inode) != fi);
> - curr_index = wpa->ia.write.in.offset >> PAGE_SHIFT;
> - if (idx_from >= curr_index + wpa->ia.ap.num_folios)
> - n = n->rb_right;
> - else if (idx_to < curr_index)
> - n = n->rb_left;
> - else
> - return wpa;
> - }
> - return NULL;
> -}
> -
> -/*
> - * Check if any page in a range is under writeback
> - */
> -static bool fuse_range_is_writeback(struct inode *inode, pgoff_t idx_from,
> - pgoff_t idx_to)
> -{
> - struct fuse_inode *fi = get_fuse_inode(inode);
> - bool found;
> -
> - if (RB_EMPTY_ROOT(&fi->writepages))
> - return false;
> -
> - spin_lock(&fi->lock);
> - found = fuse_find_writeback(fi, idx_from, idx_to);
> - spin_unlock(&fi->lock);
> -
> - return found;
> -}
> -
> -static inline bool fuse_page_is_writeback(struct inode *inode, pgoff_t index)
> -{
> - return fuse_range_is_writeback(inode, index, index);
> -}
> -
> -/*
> - * Wait for page writeback to be completed.
> - *
> - * Since fuse doesn't rely on the VM writeback tracking, this has to
> - * use some other means.
> - */
> -static void fuse_wait_on_page_writeback(struct inode *inode, pgoff_t index)
> -{
> - struct fuse_inode *fi = get_fuse_inode(inode);
> -
> - wait_event(fi->page_waitq, !fuse_page_is_writeback(inode, index));
> -}
> -
> -static inline bool fuse_folio_is_writeback(struct inode *inode,
> - struct folio *folio)
> -{
> - pgoff_t last = folio_next_index(folio) - 1;
> - return fuse_range_is_writeback(inode, folio_index(folio), last);
> -}
> -
> -static void fuse_wait_on_folio_writeback(struct inode *inode,
> - struct folio *folio)
> -{
> - struct fuse_inode *fi = get_fuse_inode(inode);
> -
> - wait_event(fi->page_waitq, !fuse_folio_is_writeback(inode, folio));
> -}
> -
> /*
> * Wait for all pending writepages on the inode to finish.
> *
> @@ -886,13 +808,6 @@ static int fuse_do_readfolio(struct file *file, struct folio *folio)
> ssize_t res;
> u64 attr_ver;
>
> - /*
> - * With the temporary pages that are used to complete writeback, we can
> - * have writeback that extends beyond the lifetime of the folio. So
> - * make sure we read a properly synced folio.
> - */
> - fuse_wait_on_folio_writeback(inode, folio);
> -
> attr_ver = fuse_get_attr_version(fm->fc);
>
> /* Don't overflow end offset */
> @@ -1003,17 +918,12 @@ static void fuse_send_readpages(struct fuse_io_args *ia, struct file *file)
> static void fuse_readahead(struct readahead_control *rac)
> {
> struct inode *inode = rac->mapping->host;
> - struct fuse_inode *fi = get_fuse_inode(inode);
> struct fuse_conn *fc = get_fuse_conn(inode);
> unsigned int max_pages, nr_pages;
> - pgoff_t first = readahead_index(rac);
> - pgoff_t last = first + readahead_count(rac) - 1;
>
> if (fuse_is_bad(inode))
> return;
>
> - wait_event(fi->page_waitq, !fuse_range_is_writeback(inode, first, last));
> -
> max_pages = min_t(unsigned int, fc->max_pages,
> fc->max_read / PAGE_SIZE);
>
> @@ -1172,7 +1082,7 @@ static ssize_t fuse_send_write_pages(struct fuse_io_args *ia,
> int err;
>
> for (i = 0; i < ap->num_folios; i++)
> - fuse_wait_on_folio_writeback(inode, ap->folios[i]);
> + folio_wait_writeback(ap->folios[i]);
>
> fuse_write_args_fill(ia, ff, pos, count);
> ia->write.in.flags = fuse_write_flags(iocb);
> @@ -1622,7 +1532,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
> return res;
> }
> }
> - if (!cuse && fuse_range_is_writeback(inode, idx_from, idx_to)) {
> + if (!cuse && filemap_range_has_writeback(mapping, pos, (pos + count - 1))) {
> if (!write)
> inode_lock(inode);
> fuse_sync_writes(inode);
> @@ -1825,7 +1735,7 @@ static void fuse_writepage_free(struct fuse_writepage_args *wpa)
> fuse_sync_bucket_dec(wpa->bucket);
>
> for (i = 0; i < ap->num_folios; i++)
> - folio_put(ap->folios[i]);
> + folio_end_writeback(ap->folios[i]);
I noticed that if we call folio_end_writeback() in fuse_writepage_finish()
(rather than in fuse_writepage_free()), there's ~50% buffer write
bandwidth performance gain (5500MB -> 8500MB)[*]
The fuse server is generally implemented in multi-thread style, and
multiple (fuse server) worker threads could fetch and process FUSE_WRITE
requests of one fuse inode. Then there's serious lock contention for
the xarray lock (of the address space) when these worker threads
call fuse_writepage_end->folio_end_writeback when they are sending
replies to FUSE_WRITE requests.
The lock contention is greatly alleviated when folio_end_writeback() is
serialized with fi->lock. IOWs, in the current implementation
(folio_end_writeback() in fuse_writepage_free()), each worker thread
needs to compete for the xarray lock 256 times (one fuse request can
contain at most 256 pages if FUSE_MAX_MAX_PAGES is 256) when completing
a FUSE_WRITE request.
After moving folio_end_writeback() to fuse_writepage_finish(), each
worker thread needs to compete for fi->lock only once. IOWs, the locking
granularity is larger now.
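Roughly what I have in mind is the following (an untested sketch against
this patch; the stats/accounting parts of fuse_writepage_finish() are
omitted):

	static void fuse_writepage_finish(struct fuse_writepage_args *wpa)
	{
		struct fuse_args_pages *ap = &wpa->ia.ap;
		int i;

		/* Called with fi->lock held: ending writeback for the
		 * whole request here serializes the xarray updates from
		 * different worker threads per request, instead of
		 * contending folio by folio at free time. */
		for (i = 0; i < ap->num_folios; i++)
			folio_end_writeback(ap->folios[i]);
	}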
> @@ -2367,54 +2111,23 @@ static int fuse_writepages_fill(struct folio *folio,
> data->wpa = NULL;
> }
>
> - err = -ENOMEM;
> - tmp_folio = folio_alloc(GFP_NOFS | __GFP_HIGHMEM, 0);
> - if (!tmp_folio)
> - goto out_unlock;
> -
> - /*
> - * The page must not be redirtied until the writeout is completed
> - * (i.e. userspace has sent a reply to the write request). Otherwise
> - * there could be more than one temporary page instance for each real
> - * page.
> - *
> - * This is ensured by holding the page lock in page_mkwrite() while
> - * checking fuse_page_is_writeback(). We already hold the page lock
> - * since clear_page_dirty_for_io() and keep it held until we add the
> - * request to the fi->writepages list and increment ap->num_folios.
> - * After this fuse_page_is_writeback() will indicate that the page is
> - * under writeback, so we can release the page lock.
> - */
> if (data->wpa == NULL) {
> err = -ENOMEM;
> wpa = fuse_writepage_args_setup(folio, data->ff);
> - if (!wpa) {
> - folio_put(tmp_folio);
> + if (!wpa)
> goto out_unlock;
> - }
> fuse_file_get(wpa->ia.ff);
> data->max_folios = 1;
> ap = &wpa->ia.ap;
> }
> folio_start_writeback(folio);
There's also lock contention on the xarray lock when calling
folio_start_writeback().
I also noticed a strange thing: if we take fi->lock and release it
immediately, the write bandwidth improves by 5% (8500MB -> 9000MB). The
place where the lock/unlock of fi->lock is inserted actually doesn't
matter. "perf lock contention" shows that contention on the xarray lock
is greatly alleviated, though I can't quite explain why...
As the performance gain is not significant (~5%), I think we can leave
this strange phenomenon aside for now.
[*] test case:
./passthrough_hp --bypass-rw 2 /tmp /mnt
(testbench mode in
https://github.com/libfuse/libfuse/pull/807/commits/e83789cc6e83ca42ccc9899c4f7f8c69f31cbff9
bypass the buffer copy along with the persistence procedure)
fio -fallocate=0 -numjobs=32 -iodepth=1 -ioengine=sync -sync=0
--direct=0 -rw=write -bs=1M -size=100G --time_based --runtime=300
-directory=/mnt/ --group_reporting --name=Fio
--
Thanks,
Jingbo
* Re: [PATCH v5 3/5] fs/writeback: in wait_sb_inodes(), skip wait for AS_WRITEBACK_INDETERMINATE mappings
From: Jingbo Xu @ 2024-11-20 12:07 UTC (permalink / raw)
To: Joanne Koong, miklos, linux-fsdevel
Cc: shakeel.butt, josef, linux-mm, bernd.schubert, kernel-team
On 11/16/24 6:44 AM, Joanne Koong wrote:
> For filesystems with the AS_WRITEBACK_INDETERMINATE flag set, writeback
> operations may take an indeterminate time to complete. For example, writing
> data back to disk in FUSE filesystems depends on the userspace server
> successfully completing writeback.
>
> In this commit, wait_sb_inodes() skips waiting on writeback if the
> inode's mapping has AS_WRITEBACK_INDETERMINATE set, else sync(2) may take an
> indeterminate amount of time to complete.
>
> If the caller wishes to ensure the data for a mapping with the
> AS_WRITEBACK_INDETERMINATE flag set has actually been written back to disk,
> they should use fsync(2)/fdatasync(2) instead.
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> ---
> fs/fs-writeback.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index d8bec3c1bb1f..ad192db17ce4 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -2659,6 +2659,9 @@ static void wait_sb_inodes(struct super_block *sb)
> if (!mapping_tagged(mapping, PAGECACHE_TAG_WRITEBACK))
> continue;
>
> + if (mapping_writeback_indeterminate(mapping))
> + continue;
> +
> spin_unlock_irq(&sb->s_inode_wblist_lock);
>
> spin_lock(&inode->i_lock);
Reviewed-by: Jingbo Xu <jefflexu@linux.alibaba.com>
--
Thanks,
Jingbo
* Re: [PATCH v5 5/5] fuse: remove tmp folio for writebacks and internal rb tree
From: Joanne Koong @ 2024-11-20 21:07 UTC (permalink / raw)
To: Jingbo Xu
Cc: miklos, linux-fsdevel, shakeel.butt, josef, linux-mm,
bernd.schubert, kernel-team
On Mon, Nov 18, 2024 at 11:59 PM Jingbo Xu <jefflexu@linux.alibaba.com> wrote:
>
> On 11/16/24 6:44 AM, Joanne Koong wrote:
>
> > @@ -1838,7 +1748,7 @@ static void fuse_writepage_finish_stat(struct inode *inode, struct folio *folio)
> > struct backing_dev_info *bdi = inode_to_bdi(inode);
> >
> > dec_wb_stat(&bdi->wb, WB_WRITEBACK);
> > - node_stat_sub_folio(folio, NR_WRITEBACK_TEMP);
> > + node_stat_sub_folio(folio, NR_WRITEBACK);
>
> Now that fuse_writepage_finish_stat() has only one caller, we could inline
> it into that caller.
I'll make this change in v6.
>
>
> > static void fuse_writepage_args_page_fill(struct fuse_writepage_args *wpa, struct folio *folio,
> > - struct folio *tmp_folio, uint32_t folio_index)
> > + uint32_t folio_index)
> > {
> > struct inode *inode = folio->mapping->host;
> > struct fuse_args_pages *ap = &wpa->ia.ap;
> >
> > - folio_copy(tmp_folio, folio);
> > -
> > - ap->folios[folio_index] = tmp_folio;
> > + ap->folios[folio_index] = folio;
> > ap->descs[folio_index].offset = 0;
> > ap->descs[folio_index].length = PAGE_SIZE;
> >
> > inc_wb_stat(&inode_to_bdi(inode)->wb, WB_WRITEBACK);
> > - node_stat_add_folio(tmp_folio, NR_WRITEBACK_TEMP);
> > + node_stat_add_folio(folio, NR_WRITEBACK);
>
> Incrementing the NR_WRITEBACK counter here, along with the corresponding
> decrement in fuse_writepage_finish_stat(), seems unnecessary:
> folio_start_writeback() already increments NR_WRITEBACK, and
> folio_end_writeback() decrements it.
>
Nice find, I'll make this change in v6.
Thanks,
Joanne
>
> --
> Thanks,
> Jingbo
* Re: [PATCH v5 5/5] fuse: remove tmp folio for writebacks and internal rb tree
From: Joanne Koong @ 2024-11-20 21:53 UTC (permalink / raw)
To: Jingbo Xu
Cc: miklos, linux-fsdevel, shakeel.butt, josef, linux-mm,
bernd.schubert, kernel-team
On Wed, Nov 20, 2024 at 1:56 AM Jingbo Xu <jefflexu@linux.alibaba.com> wrote:
>
> On 11/16/24 6:44 AM, Joanne Koong wrote:
> > In the current FUSE writeback design (see commit 3be5a52b30aa
> > ("fuse: support writable mmap")), a temp page is allocated for every
> > dirty page to be written back, the contents of the dirty page are copied over
> > to the temp page, and the temp page gets handed to the server to write back.
> >
> > This is done so that writeback may be immediately cleared on the dirty page,
> > and this in turn is done for two reasons:
> > a) in order to mitigate the following deadlock scenario that may arise
> > if reclaim waits on writeback on the dirty page to complete:
> > * single-threaded FUSE server is in the middle of handling a request
> > that needs a memory allocation
> > * memory allocation triggers direct reclaim
> > * direct reclaim waits on a folio under writeback
> > * the FUSE server can't write back the folio since it's stuck in
> > direct reclaim
> > b) in order to unblock internal (eg sync, page compaction) waits on
> > writeback without needing the server to complete writing back to disk,
> > which may take an indeterminate amount of time.
> >
> > With a recent change that added AS_WRITEBACK_INDETERMINATE and mitigates
> > the situations described above, FUSE writeback does not need to use
> > temp pages if it sets AS_WRITEBACK_INDETERMINATE on its inode mappings.
> >
> > This commit sets AS_WRITEBACK_INDETERMINATE on the inode mappings
> > and removes the temporary pages + extra copying and the internal rb
> > tree.
> >
> > fio benchmarks --
> > (using averages observed from 10 runs, throwing away outliers)
> >
> > Setup:
> > sudo mount -t tmpfs -o size=30G tmpfs ~/tmp_mount
> > ./libfuse/build/example/passthrough_ll -o writeback -o max_threads=4 -o source=~/tmp_mount ~/fuse_mount
> >
> > fio --name=writeback --ioengine=sync --rw=write --bs={1k,4k,1M} --size=2G
> > --numjobs=2 --ramp_time=30 --group_reporting=1 --directory=/root/fuse_mount
> >
> > bs = 1k 4k 1M
> > Before 351 MiB/s 1818 MiB/s 1851 MiB/s
> > After 341 MiB/s 2246 MiB/s 2685 MiB/s
> > % diff -3% 23% 45%
> >
> > Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> > ---
> > fs/fuse/file.c | 339 +++----------------------------------------------
> > 1 file changed, 20 insertions(+), 319 deletions(-)
> >
> > diff --git a/fs/fuse/file.c b/fs/fuse/file.c
> > index 88d0946b5bc9..56289ac58596 100644
> > --- a/fs/fuse/file.c
> > +++ b/fs/fuse/file.c
> > @@ -1172,7 +1082,7 @@ static ssize_t fuse_send_write_pages(struct fuse_io_args *ia,
> > int err;
> >
> > for (i = 0; i < ap->num_folios; i++)
> > - fuse_wait_on_folio_writeback(inode, ap->folios[i]);
> > + folio_wait_writeback(ap->folios[i]);
> >
> > fuse_write_args_fill(ia, ff, pos, count);
> > ia->write.in.flags = fuse_write_flags(iocb);
> > @@ -1622,7 +1532,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
> > return res;
> > }
> > }
> > - if (!cuse && fuse_range_is_writeback(inode, idx_from, idx_to)) {
> > + if (!cuse && filemap_range_has_writeback(mapping, pos, (pos + count - 1))) {
> > if (!write)
> > inode_lock(inode);
> > fuse_sync_writes(inode);
> > @@ -1825,7 +1735,7 @@ static void fuse_writepage_free(struct fuse_writepage_args *wpa)
> > fuse_sync_bucket_dec(wpa->bucket);
> >
> > for (i = 0; i < ap->num_folios; i++)
> > - folio_put(ap->folios[i]);
> > + folio_end_writeback(ap->folios[i]);
>
> I noticed that if we call folio_end_writeback() in fuse_writepage_finish()
> (rather than fuse_writepage_free()), there's a ~50% buffer write
> bandwidth performance gain (5500MB/s -> 8500MB/s)[*]
>
> The fuse server is generally implemented in a multi-threaded style, and
> multiple (fuse server) worker threads may fetch and process FUSE_WRITE
> requests for a single fuse inode. This causes serious contention on
> the xarray lock (of the address space) when these worker threads
> call fuse_writepage_end->folio_end_writeback while sending
> replies to FUSE_WRITE requests.
>
> The lock contention is greatly alleviated when folio_end_writeback() is
> serialized with fi->lock. IOWs in the current implementation
> (folio_end_writeback() in fuse_writepage_free()), each worker thread
> needs to compete for the xarray lock 256 times (one fuse request can
> contain at most 256 pages if FUSE_MAX_MAX_PAGES is 256) when completing
> a FUSE_WRITE request.
>
> After moving folio_end_writeback() to fuse_writepage_finish(), each
> worker thread needs to compete for fi->lock only once. IOWs the locking
> granularity is coarser now.
>
Interesting! Thanks for sharing. Are you able to consistently reproduce
these results, and on different machines? When I run it locally on my
machine using the commands you shared, I'm seeing roughly the same
throughput:
Current implementation (folio_end_writeback() in fuse_writepage_free()):
WRITE: bw=385MiB/s (404MB/s), 385MiB/s-385MiB/s (404MB/s-404MB/s),
io=113GiB (121GB), run=300177-300177msec
WRITE: bw=384MiB/s (403MB/s), 384MiB/s-384MiB/s (403MB/s-403MB/s),
io=113GiB (121GB), run=300178-300178msec
folio_end_writeback() in fuse_writepage_finish():
WRITE: bw=387MiB/s (406MB/s), 387MiB/s-387MiB/s (406MB/s-406MB/s),
io=113GiB (122GB), run=300165-300165msec
WRITE: bw=381MiB/s (399MB/s), 381MiB/s-381MiB/s (399MB/s-399MB/s),
io=112GiB (120GB), run=300143-300143msec
I wonder if it's because your machine is so much faster that lock
contention makes a difference for you, whereas on my machine there are
other things that slow it down before lock contention comes into play.
I see your point about why having folio_end_writeback() in
fuse_writepage_finish() inside the scope of the fi->lock could make it
faster, but I could also see how having it outside the lock could make
it faster as well. I'm thinking of the scenario where, if there are 8
threads all executing fuse_send_writepage() at the same time, calling
folio_end_writeback() outside the fi->lock would unblock other threads
waiting on the fi->lock, and those threads could run while
folio_end_writeback() executes.
Looking at it some more, it seems like it'd be useful if there were an
equivalent API to folio_end_writeback() that takes an array of folios
and only needs to grab the xarray lock once to clear writeback on all
the folios in the array.
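Something roughly along these lines (a hypothetical sketch --
folio_end_writeback_batch() does not exist today, and the per-folio
work that folio_end_writeback() also does, like clearing the writeback
flag, updating accounting, and waking waiters, is elided and would
need the same batching treatment):

/*
 * Hypothetical batch variant of folio_end_writeback(): clear the
 * writeback tag on several folios of the same mapping while taking
 * the i_pages xarray lock only once, instead of once per folio.
 */
static void folio_end_writeback_batch(struct address_space *mapping,
				      struct folio **folios, int nr)
{
	unsigned long flags;
	int i;

	xa_lock_irqsave(&mapping->i_pages, flags);
	for (i = 0; i < nr; i++)
		__xa_clear_mark(&mapping->i_pages, folios[i]->index,
				PAGECACHE_TAG_WRITEBACK);
	xa_unlock_irqrestore(&mapping->i_pages, flags);

	/* per-folio flag clearing / accounting / wakeups elided */
}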
When fuse supports large folios [*], this will help contention on the
xarray lock as well because there'll be fewer folio_end_writeback()
calls.
I'm happy to move the folio_end_writeback() call to
fuse_writepage_finish() considering what you're seeing. 5500 MB/s ->
8500 MB/s is a huge perf improvement!
[*] https://lore.kernel.org/linux-fsdevel/20241109001258.2216604-1-joannelkoong@gmail.com/
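To make the move concrete, here's a rough sketch of what I have in
mind for v6 (sketch only -- fuse_writepage_finish()'s real signature
and surrounding bookkeeping are abbreviated, and this assumes it keeps
being called with fi->lock held from fuse_writepage_end()):

static void fuse_writepage_finish(struct fuse_writepage_args *wpa)
{
	struct fuse_args_pages *ap = &wpa->ia.ap;
	int i;

	/*
	 * fi->lock is held by the caller, so each completing FUSE_WRITE
	 * request serializes its folio_end_writeback() calls here behind
	 * one fi->lock acquisition instead of contending on the xarray
	 * lock folio-by-folio from fuse_writepage_free().
	 */
	for (i = 0; i < ap->num_folios; i++)
		folio_end_writeback(ap->folios[i]);

	/* ... existing bookkeeping unchanged ... */
}

fuse_writepage_free() would then drop its folio_end_writeback() loop.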
>
>
> > @@ -2367,54 +2111,23 @@ static int fuse_writepages_fill(struct folio *folio,
> > data->wpa = NULL;
> > }
> >
> > - err = -ENOMEM;
> > - tmp_folio = folio_alloc(GFP_NOFS | __GFP_HIGHMEM, 0);
> > - if (!tmp_folio)
> > - goto out_unlock;
> > -
> > - /*
> > - * The page must not be redirtied until the writeout is completed
> > - * (i.e. userspace has sent a reply to the write request). Otherwise
> > - * there could be more than one temporary page instance for each real
> > - * page.
> > - *
> > - * This is ensured by holding the page lock in page_mkwrite() while
> > - * checking fuse_page_is_writeback(). We already hold the page lock
> > - * since clear_page_dirty_for_io() and keep it held until we add the
> > - * request to the fi->writepages list and increment ap->num_folios.
> > - * After this fuse_page_is_writeback() will indicate that the page is
> > - * under writeback, so we can release the page lock.
> > - */
> > if (data->wpa == NULL) {
> > err = -ENOMEM;
> > wpa = fuse_writepage_args_setup(folio, data->ff);
> > - if (!wpa) {
> > - folio_put(tmp_folio);
> > + if (!wpa)
> > goto out_unlock;
> > - }
> > fuse_file_get(wpa->ia.ff);
> > data->max_folios = 1;
> > ap = &wpa->ia.ap;
> > }
> > folio_start_writeback(folio);
>
> There's also lock contention for the xarray lock when calling
> folio_start_writeback().
>
> I also noticed a strange thing: if we lock fi->lock and unlock it
> immediately, the write bandwidth improves by 5% (8500MB/s -> 9000MB/s). The
Interesting! By locking fi->lock and unlocking immediately, do you mean
locking it, then unlocking it, then calling folio_start_writeback(), or
locking it, calling folio_start_writeback(), and then unlocking it?
Thanks,
Joanne
> place where we insert the "lock fi->lock and unlock" actually
> doesn't matter. "perf lock contention" shows the contention on
> the xarray lock is greatly alleviated, though I can't quite
> understand how that happens...
>
> As the performance gain is not significant (~5%), I think we can leave
> this strange phenomenon aside for now.
>
>
>
> [*] test case:
> ./passthrough_hp --bypass-rw 2 /tmp /mnt
> (testbench mode in
> https://github.com/libfuse/libfuse/pull/807/commits/e83789cc6e83ca42ccc9899c4f7f8c69f31cbff9
> which bypasses the buffer copy along with the persistence procedure)
>
> fio -fallocate=0 -numjobs=32 -iodepth=1 -ioengine=sync -sync=0
> --direct=0 -rw=write -bs=1M -size=100G --time_based --runtime=300
> -directory=/mnt/ --group_reporting --name=Fio
> --
> Thanks,
> Jingbo
* Re: [PATCH v5 5/5] fuse: remove tmp folio for writebacks and internal rb tree
2024-11-20 21:53 ` Joanne Koong
@ 2024-11-21 3:08 ` Jingbo Xu
2024-11-21 10:11 ` Bernd Schubert
0 siblings, 1 reply; 15+ messages in thread
From: Jingbo Xu @ 2024-11-21 3:08 UTC (permalink / raw)
To: Joanne Koong
Cc: miklos, linux-fsdevel, shakeel.butt, josef, linux-mm,
bernd.schubert, kernel-team
On 11/21/24 5:53 AM, Joanne Koong wrote:
> On Wed, Nov 20, 2024 at 1:56 AM Jingbo Xu <jefflexu@linux.alibaba.com> wrote:
>>
>> On 11/16/24 6:44 AM, Joanne Koong wrote:
>>> In the current FUSE writeback design (see commit 3be5a52b30aa
>>> ("fuse: support writable mmap")), a temp page is allocated for every
>>> dirty page to be written back, the contents of the dirty page are copied over
>>> to the temp page, and the temp page gets handed to the server to write back.
>>>
>>> This is done so that writeback may be immediately cleared on the dirty page,
>>> and this in turn is done for two reasons:
>>> a) in order to mitigate the following deadlock scenario that may arise
>>> if reclaim waits on writeback on the dirty page to complete:
>>> * single-threaded FUSE server is in the middle of handling a request
>>> that needs a memory allocation
>>> * memory allocation triggers direct reclaim
>>> * direct reclaim waits on a folio under writeback
>>> * the FUSE server can't write back the folio since it's stuck in
>>> direct reclaim
>>> b) in order to unblock internal (eg sync, page compaction) waits on
>>> writeback without needing the server to complete writing back to disk,
>>> which may take an indeterminate amount of time.
>>>
>>> With a recent change that added AS_WRITEBACK_INDETERMINATE and mitigates
>>> the situations described above, FUSE writeback does not need to use
>>> temp pages if it sets AS_WRITEBACK_INDETERMINATE on its inode mappings.
>>>
>>> This commit sets AS_WRITEBACK_INDETERMINATE on the inode mappings
>>> and removes the temporary pages + extra copying and the internal rb
>>> tree.
>>>
>>> fio benchmarks --
>>> (using averages observed from 10 runs, throwing away outliers)
>>>
>>> Setup:
>>> sudo mount -t tmpfs -o size=30G tmpfs ~/tmp_mount
>>> ./libfuse/build/example/passthrough_ll -o writeback -o max_threads=4 -o source=~/tmp_mount ~/fuse_mount
>>>
>>> fio --name=writeback --ioengine=sync --rw=write --bs={1k,4k,1M} --size=2G
>>> --numjobs=2 --ramp_time=30 --group_reporting=1 --directory=/root/fuse_mount
>>>
>>> bs = 1k 4k 1M
>>> Before 351 MiB/s 1818 MiB/s 1851 MiB/s
>>> After 341 MiB/s 2246 MiB/s 2685 MiB/s
>>> % diff -3% 23% 45%
>>>
>>> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
>>> ---
>>> fs/fuse/file.c | 339 +++----------------------------------------------
>>> 1 file changed, 20 insertions(+), 319 deletions(-)
>>>
>>> diff --git a/fs/fuse/file.c b/fs/fuse/file.c
>>> index 88d0946b5bc9..56289ac58596 100644
>>> --- a/fs/fuse/file.c
>>> +++ b/fs/fuse/file.c
>>> @@ -1172,7 +1082,7 @@ static ssize_t fuse_send_write_pages(struct fuse_io_args *ia,
>>> int err;
>>>
>>> for (i = 0; i < ap->num_folios; i++)
>>> - fuse_wait_on_folio_writeback(inode, ap->folios[i]);
>>> + folio_wait_writeback(ap->folios[i]);
>>>
>>> fuse_write_args_fill(ia, ff, pos, count);
>>> ia->write.in.flags = fuse_write_flags(iocb);
>>> @@ -1622,7 +1532,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
>>> return res;
>>> }
>>> }
>>> - if (!cuse && fuse_range_is_writeback(inode, idx_from, idx_to)) {
>>> + if (!cuse && filemap_range_has_writeback(mapping, pos, (pos + count - 1))) {
>>> if (!write)
>>> inode_lock(inode);
>>> fuse_sync_writes(inode);
>>> @@ -1825,7 +1735,7 @@ static void fuse_writepage_free(struct fuse_writepage_args *wpa)
>>> fuse_sync_bucket_dec(wpa->bucket);
>>>
>>> for (i = 0; i < ap->num_folios; i++)
>>> - folio_put(ap->folios[i]);
>>> + folio_end_writeback(ap->folios[i]);
>>
>> I noticed that if we call folio_end_writeback() in fuse_writepage_finish()
>> (rather than fuse_writepage_free()), there's a ~50% buffer write
>> bandwidth performance gain (5500MB/s -> 8500MB/s)[*]
>>
>> The fuse server is generally implemented in a multi-threaded style, and
>> multiple (fuse server) worker threads may fetch and process FUSE_WRITE
>> requests for a single fuse inode. This causes serious contention on
>> the xarray lock (of the address space) when these worker threads
>> call fuse_writepage_end->folio_end_writeback while sending
>> replies to FUSE_WRITE requests.
>>
>> The lock contention is greatly alleviated when folio_end_writeback() is
>> serialized with fi->lock. IOWs in the current implementation
>> (folio_end_writeback() in fuse_writepage_free()), each worker thread
>> needs to compete for the xarray lock 256 times (one fuse request can
>> contain at most 256 pages if FUSE_MAX_MAX_PAGES is 256) when completing
>> a FUSE_WRITE request.
>>
>> After moving folio_end_writeback() to fuse_writepage_finish(), each
>> worker thread needs to compete for fi->lock only once. IOWs the locking
>> granularity is coarser now.
>>
>
> Interesting! Thanks for sharing. Are you able to consistently reproduce
> these results, and on different machines? When I run it locally on my
> machine using the commands you shared, I'm seeing roughly the same
> throughput:
>
> Current implementation (folio_end_writeback() in fuse_writepage_free()):
> WRITE: bw=385MiB/s (404MB/s), 385MiB/s-385MiB/s (404MB/s-404MB/s),
> io=113GiB (121GB), run=300177-300177msec
> WRITE: bw=384MiB/s (403MB/s), 384MiB/s-384MiB/s (403MB/s-403MB/s),
> io=113GiB (121GB), run=300178-300178msec
>
> folio_end_writeback() in fuse_writepage_finish():
> WRITE: bw=387MiB/s (406MB/s), 387MiB/s-387MiB/s (406MB/s-406MB/s),
> io=113GiB (122GB), run=300165-300165msec
> WRITE: bw=381MiB/s (399MB/s), 381MiB/s-381MiB/s (399MB/s-399MB/s),
> io=112GiB (120GB), run=300143-300143msec
>
> I wonder if it's because your machine is so much faster that lock
> contention makes a difference for you, whereas on my machine there are
> other things that slow it down before lock contention comes into play.
Yeah, I agree that the lock contention matters only when the writeback
kworker consumes 100% CPU, i.e. when the writeback kworker is the
bottleneck. To expose that, the passthrough_hp daemon runs in
benchmark[*] mode (I noticed that passthrough_hp itself can be the
bottleneck when "--bypass-rw" mode is disabled).
[*]
https://github.com/libfuse/libfuse/pull/807/commits/e83789cc6e83ca42ccc9899c4f7f8c69f31cbff9
>
> I see your point about why having folio_end_writeback() in
> fuse_writepage_finish() inside the scope of the fi->lock could make it
> faster, but I could also see how having it outside the lock could make
> it faster as well. I'm thinking of the scenario where, if there are 8
> threads all executing fuse_send_writepage() at the same time, calling
> folio_end_writeback() outside the fi->lock would unblock other threads
> waiting on the fi->lock, and those threads could run while
> folio_end_writeback() executes.
>
> Looking at it some more, it seems like it'd be useful if there were an
> equivalent API to folio_end_writeback() that takes an array of folios
> and only needs to grab the xarray lock once to clear writeback on all
> the folios in the array.
Yes, that's exactly what we need.
>
> When fuse supports large folios [*], this will help contention on the
> xarray lock as well because there'll be fewer folio_end_writeback()
> calls.
Cool, it definitely helps.
>
> I'm happy to move the folio_end_writeback() call to
> fuse_writepage_finish() considering what you're seeing. 5500 MB/s ->
> 8500 MB/s is a huge perf improvement!
Those statistics were measured in benchmark ("--bypass-rw") mode. When
"--bypass-rw" mode is disabled and fuse passthrough_hp is tested over
ext4 on nvme, the performance gain is ~10% (4009MB/s -> 4428MB/s).
>>> @@ -2367,54 +2111,23 @@ static int fuse_writepages_fill(struct folio *folio,
>>> data->wpa = NULL;
>>> }
>>>
>>> - err = -ENOMEM;
>>> - tmp_folio = folio_alloc(GFP_NOFS | __GFP_HIGHMEM, 0);
>>> - if (!tmp_folio)
>>> - goto out_unlock;
>>> -
>>> - /*
>>> - * The page must not be redirtied until the writeout is completed
>>> - * (i.e. userspace has sent a reply to the write request). Otherwise
>>> - * there could be more than one temporary page instance for each real
>>> - * page.
>>> - *
>>> - * This is ensured by holding the page lock in page_mkwrite() while
>>> - * checking fuse_page_is_writeback(). We already hold the page lock
>>> - * since clear_page_dirty_for_io() and keep it held until we add the
>>> - * request to the fi->writepages list and increment ap->num_folios.
>>> - * After this fuse_page_is_writeback() will indicate that the page is
>>> - * under writeback, so we can release the page lock.
>>> - */
>>> if (data->wpa == NULL) {
>>> err = -ENOMEM;
>>> wpa = fuse_writepage_args_setup(folio, data->ff);
>>> - if (!wpa) {
>>> - folio_put(tmp_folio);
>>> + if (!wpa)
>>> goto out_unlock;
>>> - }
>>> fuse_file_get(wpa->ia.ff);
>>> data->max_folios = 1;
>>> ap = &wpa->ia.ap;
>>> }
>>> folio_start_writeback(folio);
>>
>> There's also lock contention for the xarray lock when calling
>> folio_start_writeback().
>>
>> I also noticed a strange thing: if we lock fi->lock and unlock it
>> immediately, the write bandwidth improves by 5% (8500MB/s -> 9000MB/s). The
>
> Interesting! By locking fi->lock and unlocking immediately, do you mean
> locking it, then unlocking it, then calling folio_start_writeback(), or
> locking it, calling folio_start_writeback(), and then unlocking it?
Either way works, as long as we lock/unlock fi->lock in
fuse_writepages_fill()... The lock contention is further alleviated
when folio_start_writeback() is inside the critical section of fi->lock.
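Spelled out, the two variants look like this in fuse_writepages_fill()
(sketch only; fi is assumed to be the folio's get_fuse_inode(inode)):

	/* variant 1: lock/unlock immediately, start writeback outside */
	spin_lock(&fi->lock);
	spin_unlock(&fi->lock);
	folio_start_writeback(folio);

	/* variant 2: start writeback inside the critical section */
	spin_lock(&fi->lock);
	folio_start_writeback(folio);
	spin_unlock(&fi->lock);

Both variants help; variant 2 alleviates the xarray lock contention a
bit more.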
--
Thanks,
Jingbo
* Re: [PATCH v5 5/5] fuse: remove tmp folio for writebacks and internal rb tree
2024-11-21 3:08 ` Jingbo Xu
@ 2024-11-21 10:11 ` Bernd Schubert
0 siblings, 0 replies; 15+ messages in thread
From: Bernd Schubert @ 2024-11-21 10:11 UTC (permalink / raw)
To: Jingbo Xu, Joanne Koong
Cc: miklos, linux-fsdevel, shakeel.butt, josef, linux-mm, kernel-team
On 11/21/24 04:08, Jingbo Xu wrote:
>
>
> On 11/21/24 5:53 AM, Joanne Koong wrote:
>> On Wed, Nov 20, 2024 at 1:56 AM Jingbo Xu <jefflexu@linux.alibaba.com> wrote:
>>>
>>> On 11/16/24 6:44 AM, Joanne Koong wrote:
>>>> In the current FUSE writeback design (see commit 3be5a52b30aa
>>>> ("fuse: support writable mmap")), a temp page is allocated for every
>>>> dirty page to be written back, the contents of the dirty page are copied over
>>>> to the temp page, and the temp page gets handed to the server to write back.
>>>>
>>>> This is done so that writeback may be immediately cleared on the dirty page,
>>>> and this in turn is done for two reasons:
>>>> a) in order to mitigate the following deadlock scenario that may arise
>>>> if reclaim waits on writeback on the dirty page to complete:
>>>> * single-threaded FUSE server is in the middle of handling a request
>>>> that needs a memory allocation
>>>> * memory allocation triggers direct reclaim
>>>> * direct reclaim waits on a folio under writeback
>>>> * the FUSE server can't write back the folio since it's stuck in
>>>> direct reclaim
>>>> b) in order to unblock internal (eg sync, page compaction) waits on
>>>> writeback without needing the server to complete writing back to disk,
>>>> which may take an indeterminate amount of time.
>>>>
>>>> With a recent change that added AS_WRITEBACK_INDETERMINATE and mitigates
>>>> the situations described above, FUSE writeback does not need to use
>>>> temp pages if it sets AS_WRITEBACK_INDETERMINATE on its inode mappings.
>>>>
>>>> This commit sets AS_WRITEBACK_INDETERMINATE on the inode mappings
>>>> and removes the temporary pages + extra copying and the internal rb
>>>> tree.
>>>>
>>>> fio benchmarks --
>>>> (using averages observed from 10 runs, throwing away outliers)
>>>>
>>>> Setup:
>>>> sudo mount -t tmpfs -o size=30G tmpfs ~/tmp_mount
>>>> ./libfuse/build/example/passthrough_ll -o writeback -o max_threads=4 -o source=~/tmp_mount ~/fuse_mount
>>>>
>>>> fio --name=writeback --ioengine=sync --rw=write --bs={1k,4k,1M} --size=2G
>>>> --numjobs=2 --ramp_time=30 --group_reporting=1 --directory=/root/fuse_mount
>>>>
>>>> bs = 1k 4k 1M
>>>> Before 351 MiB/s 1818 MiB/s 1851 MiB/s
>>>> After 341 MiB/s 2246 MiB/s 2685 MiB/s
>>>> % diff -3% 23% 45%
>>>>
>>>> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
>>>> ---
>>>> fs/fuse/file.c | 339 +++----------------------------------------------
>>>> 1 file changed, 20 insertions(+), 319 deletions(-)
>>>>
>>>> diff --git a/fs/fuse/file.c b/fs/fuse/file.c
>>>> index 88d0946b5bc9..56289ac58596 100644
>>>> --- a/fs/fuse/file.c
>>>> +++ b/fs/fuse/file.c
>>>> @@ -1172,7 +1082,7 @@ static ssize_t fuse_send_write_pages(struct fuse_io_args *ia,
>>>> int err;
>>>>
>>>> for (i = 0; i < ap->num_folios; i++)
>>>> - fuse_wait_on_folio_writeback(inode, ap->folios[i]);
>>>> + folio_wait_writeback(ap->folios[i]);
>>>>
>>>> fuse_write_args_fill(ia, ff, pos, count);
>>>> ia->write.in.flags = fuse_write_flags(iocb);
>>>> @@ -1622,7 +1532,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
>>>> return res;
>>>> }
>>>> }
>>>> - if (!cuse && fuse_range_is_writeback(inode, idx_from, idx_to)) {
>>>> + if (!cuse && filemap_range_has_writeback(mapping, pos, (pos + count - 1))) {
>>>> if (!write)
>>>> inode_lock(inode);
>>>> fuse_sync_writes(inode);
>>>> @@ -1825,7 +1735,7 @@ static void fuse_writepage_free(struct fuse_writepage_args *wpa)
>>>> fuse_sync_bucket_dec(wpa->bucket);
>>>>
>>>> for (i = 0; i < ap->num_folios; i++)
>>>> - folio_put(ap->folios[i]);
>>>> + folio_end_writeback(ap->folios[i]);
>>>
>>> I noticed that if we call folio_end_writeback() in fuse_writepage_finish()
>>> (rather than fuse_writepage_free()), there's a ~50% buffer write
>>> bandwidth performance gain (5500MB/s -> 8500MB/s)[*]
>>>
>>> The fuse server is generally implemented in a multi-threaded style, and
>>> multiple (fuse server) worker threads may fetch and process FUSE_WRITE
>>> requests for a single fuse inode. This causes serious contention on
>>> the xarray lock (of the address space) when these worker threads
>>> call fuse_writepage_end->folio_end_writeback while sending
>>> replies to FUSE_WRITE requests.
>>>
>>> The lock contention is greatly alleviated when folio_end_writeback() is
>>> serialized with fi->lock. IOWs in the current implementation
>>> (folio_end_writeback() in fuse_writepage_free()), each worker thread
>>> needs to compete for the xarray lock 256 times (one fuse request can
>>> contain at most 256 pages if FUSE_MAX_MAX_PAGES is 256) when completing
>>> a FUSE_WRITE request.
>>>
>>> After moving folio_end_writeback() to fuse_writepage_finish(), each
>>> worker thread needs to compete for fi->lock only once. IOWs the locking
>>> granularity is coarser now.
>>>
>>
>> Interesting! Thanks for sharing. Are you able to consistently reproduce
>> these results, and on different machines? When I run it locally on my
>> machine using the commands you shared, I'm seeing roughly the same
>> throughput:
>>
>> Current implementation (folio_end_writeback() in fuse_writepage_free()):
>> WRITE: bw=385MiB/s (404MB/s), 385MiB/s-385MiB/s (404MB/s-404MB/s),
>> io=113GiB (121GB), run=300177-300177msec
>> WRITE: bw=384MiB/s (403MB/s), 384MiB/s-384MiB/s (403MB/s-403MB/s),
>> io=113GiB (121GB), run=300178-300178msec
>>
>> folio_end_writeback() in fuse_writepage_finish():
>> WRITE: bw=387MiB/s (406MB/s), 387MiB/s-387MiB/s (406MB/s-406MB/s),
>> io=113GiB (122GB), run=300165-300165msec
>> WRITE: bw=381MiB/s (399MB/s), 381MiB/s-381MiB/s (399MB/s-399MB/s),
>> io=112GiB (120GB), run=300143-300143msec
>>
>> I wonder if it's because your machine is so much faster that lock
>> contention makes a difference for you, whereas on my machine there are
>> other things that slow it down before lock contention comes into play.
>
> Yeah, I agree that the lock contention matters only when the writeback
> kworker consumes 100% CPU, i.e. when the writeback kworker is the
> bottleneck. To expose that, the passthrough_hp daemon runs in
> benchmark[*] mode (I noticed that passthrough_hp itself can be the
> bottleneck when "--bypass-rw" mode is disabled).
>
> [*]
> https://github.com/libfuse/libfuse/pull/807/commits/e83789cc6e83ca42ccc9899c4f7f8c69f31cbff9
>
>
>>
>> I see your point about why having folio_end_writeback() in
>> fuse_writepage_finish() inside the scope of the fi->lock could make it
>> faster, but I could also see how having it outside the lock could make
>> it faster as well. I'm thinking of the scenario where, if there are 8
>> threads all executing fuse_send_writepage() at the same time, calling
>> folio_end_writeback() outside the fi->lock would unblock other threads
>> waiting on the fi->lock, and those threads could run while
>> folio_end_writeback() executes.
>>
>> Looking at it some more, it seems like it'd be useful if there were an
>> equivalent API to folio_end_writeback() that takes an array of folios
>> and only needs to grab the xarray lock once to clear writeback on all
>> the folios in the array.
>
> Yes, that's exactly what we need.
>
>
>>
>> When fuse supports large folios [*], this will help contention on the
>> xarray lock as well because there'll be fewer folio_end_writeback()
>> calls.
>
> Cool, it definitely helps.
>
>
>>
>> I'm happy to move the folio_end_writeback() call to
>> fuse_writepage_finish() considering what you're seeing. 5500 MB/s ->
>> 8500 MB/s is a huge perf improvement!
>
> Those statistics were measured in benchmark ("--bypass-rw") mode. When
> "--bypass-rw" mode is disabled and fuse passthrough_hp is tested over
> ext4 on nvme, the performance gain is ~10% (4009MB/s -> 4428MB/s).
Thanks to Jingbo and Joanne for looking into this! In a typical HPC
case one currently expects >15GB/s for a single client - InfiniBand
RDMA and possibly multi-rail. The --bypass-rw result is rather close
to that.
Thanks,
Bernd
Thread overview: 15+ messages
2024-11-15 22:44 [PATCH v5 0/5] fuse: remove temp page copies in writeback Joanne Koong
2024-11-15 22:44 ` [PATCH v5 1/5] mm: add AS_WRITEBACK_INDETERMINATE mapping flag Joanne Koong
2024-11-15 23:11 ` Shakeel Butt
2024-11-15 22:44 ` [PATCH v5 2/5] mm: skip reclaiming folios in legacy memcg writeback indeterminate contexts Joanne Koong
2024-11-15 22:44 ` [PATCH v5 3/5] fs/writeback: in wait_sb_inodes(), skip wait for AS_WRITEBACK_INDETERMINATE mappings Joanne Koong
2024-11-20 12:07 ` Jingbo Xu
2024-11-15 22:44 ` [PATCH v5 4/5] mm/migrate: skip migrating folios under writeback with " Joanne Koong
2024-11-15 23:12 ` Shakeel Butt
2024-11-15 22:44 ` [PATCH v5 5/5] fuse: remove tmp folio for writebacks and internal rb tree Joanne Koong
2024-11-19 7:59 ` Jingbo Xu
2024-11-20 21:07 ` Joanne Koong
2024-11-20 9:56 ` Jingbo Xu
2024-11-20 21:53 ` Joanne Koong
2024-11-21 3:08 ` Jingbo Xu
2024-11-21 10:11 ` Bernd Schubert