From mboxrd@z Thu Jan  1 00:00:00 1970
From: Joanne Koong <joannelkoong@gmail.com>
Date: Wed, 20 Nov 2024 13:53:22 -0800
Subject: Re: [PATCH v5 5/5] fuse: remove tmp folio for writebacks and internal rb tree
To: Jingbo Xu
Cc: miklos@szeredi.hu, linux-fsdevel@vger.kernel.org, shakeel.butt@linux.dev,
	josef@toxicpanda.com, linux-mm@kvack.org, bernd.schubert@fastmail.fm,
	kernel-team@meta.com
References: <20241115224459.427610-1-joannelkoong@gmail.com>
	<20241115224459.427610-6-joannelkoong@gmail.com>
Content-Type: text/plain; charset="UTF-8"
On Wed, Nov 20, 2024 at 1:56 AM Jingbo Xu wrote:
>
> On 11/16/24 6:44 AM, Joanne Koong wrote:
> > In the current FUSE writeback design (see commit 3be5a52b30aa
> > ("fuse: support writable mmap")), a temp page is allocated for every
> > dirty page to be written back, the contents of the dirty page are
> > copied over to the temp page, and the temp page gets handed to the
> > server to write back.
> >
> > This is done so that writeback may be immediately cleared on the
> > dirty page, and this in turn is done for two reasons:
> > a) in order to mitigate the following deadlock scenario that may
> >    arise if reclaim waits on writeback on the dirty page to complete:
> >    * single-threaded FUSE server is in the middle of handling a
> >      request that needs a memory allocation
> >    * memory allocation triggers direct reclaim
> >    * direct reclaim waits on a folio under writeback
> >    * the FUSE server can't write back the folio since it's stuck in
> >      direct reclaim
> > b) in order to unblock internal (eg sync, page compaction) waits on
> >    writeback without needing the server to complete writing back to
> >    disk, which may take an indeterminate amount of time.
> >
> > With a recent change that added AS_WRITEBACK_INDETERMINATE and
> > mitigates the situations described above, FUSE writeback does not
> > need to use temp pages if it sets AS_WRITEBACK_INDETERMINATE on its
> > inode mappings.
> >
> > This commit sets AS_WRITEBACK_INDETERMINATE on the inode mappings
> > and removes the temporary pages + extra copying and the internal rb
> > tree.
> >
> > fio benchmarks --
> > (using averages observed from 10 runs, throwing away outliers)
> >
> > Setup:
> > sudo mount -t tmpfs -o size=30G tmpfs ~/tmp_mount
> > ./libfuse/build/example/passthrough_ll -o writeback -o max_threads=4 -o source=~/tmp_mount ~/fuse_mount
> >
> > fio --name=writeback --ioengine=sync --rw=write --bs={1k,4k,1M} --size=2G
> > --numjobs=2 --ramp_time=30 --group_reporting=1 --directory=/root/fuse_mount
> >
> > bs =       1k          4k          1M
> > Before   351 MiB/s  1818 MiB/s  1851 MiB/s
> > After    341 MiB/s  2246 MiB/s  2685 MiB/s
> > % diff       -3%         23%         45%
> >
> > Signed-off-by: Joanne Koong
> > ---
> >  fs/fuse/file.c | 339 +++----------------------------------------------
> >  1 file changed, 20 insertions(+), 319 deletions(-)
> >
> > diff --git a/fs/fuse/file.c b/fs/fuse/file.c
> > index 88d0946b5bc9..56289ac58596 100644
> > --- a/fs/fuse/file.c
> > +++ b/fs/fuse/file.c
> > @@ -1172,7 +1082,7 @@ static ssize_t fuse_send_write_pages(struct fuse_io_args *ia,
> >       int err;
> >
> >       for (i = 0; i < ap->num_folios; i++)
> > -             fuse_wait_on_folio_writeback(inode, ap->folios[i]);
> > +             folio_wait_writeback(ap->folios[i]);
> >
> >       fuse_write_args_fill(ia, ff, pos, count);
> >       ia->write.in.flags = fuse_write_flags(iocb);
> > @@ -1622,7 +1532,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
> >                       return res;
> >               }
> >       }
> > -     if (!cuse && fuse_range_is_writeback(inode, idx_from, idx_to)) {
> > +     if (!cuse && filemap_range_has_writeback(mapping, pos, (pos + count - 1))) {
> >               if (!write)
> >                       inode_lock(inode);
> >               fuse_sync_writes(inode);
> > @@ -1825,7 +1735,7 @@ static void fuse_writepage_free(struct fuse_writepage_args *wpa)
> >               fuse_sync_bucket_dec(wpa->bucket);
> >
> >       for (i = 0; i < ap->num_folios; i++)
> > -             folio_put(ap->folios[i]);
> > +             folio_end_writeback(ap->folios[i]);
>
> I noticed that if we folio_end_writeback() in fuse_writepage_finish()
> (rather than fuse_writepage_free()), there's ~50%
buffer write
> bandwidth performance gain (5500MB -> 8500MB)[*]
>
> The fuse server is generally implemented in multi-thread style, and
> multiple (fuse server) worker threads could fetch and process
> FUSE_WRITE requests of one fuse inode. Then there's serious lock
> contention on the xarray lock (of the address space) when these
> worker threads call fuse_writepage_end->folio_end_writeback while
> sending replies to FUSE_WRITE requests.
>
> The lock contention is greatly alleviated when folio_end_writeback()
> is serialized with fi->lock. IOWs, in the current implementation
> (folio_end_writeback() in fuse_writepage_free()), each worker thread
> needs to compete for the xarray lock 256 times (one fuse request can
> contain at most 256 pages if FUSE_MAX_MAX_PAGES is 256) when
> completing a FUSE_WRITE request.
>
> After moving folio_end_writeback() to fuse_writepage_finish(), each
> worker thread needs to compete for fi->lock only once. IOWs, the
> locking granularity is larger now.

Interesting! Thanks for sharing. Are you able to consistently repro
these results, and on different machines?
When I run it locally on my machine using the commands you shared, I'm
seeing roughly the same throughput:

Current implementation (folio_end_writeback() in fuse_writepage_free()):
WRITE: bw=385MiB/s (404MB/s), 385MiB/s-385MiB/s (404MB/s-404MB/s), io=113GiB (121GB), run=300177-300177msec
WRITE: bw=384MiB/s (403MB/s), 384MiB/s-384MiB/s (403MB/s-403MB/s), io=113GiB (121GB), run=300178-300178msec

folio_end_writeback() in fuse_writepage_finish():
WRITE: bw=387MiB/s (406MB/s), 387MiB/s-387MiB/s (406MB/s-406MB/s), io=113GiB (122GB), run=300165-300165msec
WRITE: bw=381MiB/s (399MB/s), 381MiB/s-381MiB/s (399MB/s-399MB/s), io=112GiB (120GB), run=300143-300143msec

I wonder if it's because your machine is so much faster that lock
contention makes a difference for you, whereas on my machine other
things slow it down before lock contention comes into play.

I see your point about why having folio_end_writeback() in
fuse_writepage_finish(), inside the scope of the fi->lock, could make
it faster, but I could also see how having it outside the lock could
make it faster as well. I'm thinking about the scenario where, if
there are 8 threads all executing fuse_send_writepage() at the same
time, calling folio_end_writeback() outside the fi->lock would unblock
other threads trying to get the fi->lock, and those threads could run
while folio_end_writeback() executes.

Looking at it some more, it seems like it'd be useful to have an
equivalent API to folio_end_writeback() that takes an array of folios
and only needs to grab the xarray lock once to clear writeback on all
the folios in the array. When fuse supports large folios [*], that
will help lock contention on the xarray lock as well, because there
will be fewer folio_end_writeback() calls.

I'm happy to move the folio_end_writeback() call to
fuse_writepage_finish() given what you're seeing. 5500 MB/s ->
8500 MB/s is a huge perf improvement!
[*] https://lore.kernel.org/linux-fsdevel/20241109001258.2216604-1-joannelkoong@gmail.com/

> >
> > @@ -2367,54 +2111,23 @@ static int fuse_writepages_fill(struct folio *folio,
> >                       data->wpa = NULL;
> >       }
> >
> > -     err = -ENOMEM;
> > -     tmp_folio = folio_alloc(GFP_NOFS | __GFP_HIGHMEM, 0);
> > -     if (!tmp_folio)
> > -             goto out_unlock;
> > -
> > -     /*
> > -      * The page must not be redirtied until the writeout is completed
> > -      * (i.e. userspace has sent a reply to the write request). Otherwise
> > -      * there could be more than one temporary page instance for each real
> > -      * page.
> > -      *
> > -      * This is ensured by holding the page lock in page_mkwrite() while
> > -      * checking fuse_page_is_writeback().  We already hold the page lock
> > -      * since clear_page_dirty_for_io() and keep it held until we add the
> > -      * request to the fi->writepages list and increment ap->num_folios.
> > -      * After this fuse_page_is_writeback() will indicate that the page is
> > -      * under writeback, so we can release the page lock.
> > -      */
> >       if (data->wpa == NULL) {
> >               err = -ENOMEM;
> >               wpa = fuse_writepage_args_setup(folio, data->ff);
> > -             if (!wpa) {
> > -                     folio_put(tmp_folio);
> > +             if (!wpa)
> >                       goto out_unlock;
> > -             }
> >               fuse_file_get(wpa->ia.ff);
> >               data->max_folios = 1;
> >               ap = &wpa->ia.ap;
> >       }
> >       folio_start_writeback(folio);
>
> There's also lock contention on the xarray lock when calling
> folio_start_writeback().
>
> I also noticed a strange thing: if we lock fi->lock and unlock it
> immediately, the write bandwidth improves by 5% (8500MB -> 9000MB).
> The place where to insert the "locking fi->lock and unlocking"
> actually doesn't matter. "perf lock contention" shows the lock
> contention on the xarray lock is greatly alleviated, though I can't
> understand quite well how that happens...
>
> As the performance gain is not significant (~5%), I think we can
> leave this strange phenomenon aside for now.

Interesting! By "lock fi->lock and unlock immediately", do you mean
locking it, then unlocking it, then calling folio_start_writeback(),
or locking it, calling folio_start_writeback(), and then unlocking it?

Thanks,
Joanne

> [*] test case:
> ./passthrough_hp --bypass-rw 2 /tmp /mnt
> (testbench mode in
> https://github.com/libfuse/libfuse/pull/807/commits/e83789cc6e83ca42ccc9899c4f7f8c69f31cbff9
> bypasses the buffer copy along with the persistence procedure)
>
> fio -fallocate=0 -numjobs=32 -iodepth=1 -ioengine=sync -sync=0
> --direct=0 -rw=write -bs=1M -size=100G --time_based --runtime=300
> -directory=/mnt/ --group_reporting --name=Fio
> --
> Thanks,
> Jingbo