Date: Wed, 14 May 2025 16:42:16 -0700
Subject: [RFC PATCH v2 37/51] filemap: Pass address_space mapping to ->free_folio()
From: Ackerley Tng
To: kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	x86@kernel.org, linux-fsdevel@vger.kernel.org
Cc: ackerleytng@google.com, aik@amd.com, ajones@ventanamicro.com,
	akpm@linux-foundation.org, amoorthy@google.com, anthony.yznaga@oracle.com,
	anup@brainfault.org, aou@eecs.berkeley.edu, bfoster@redhat.com,
	binbin.wu@linux.intel.com, brauner@kernel.org, catalin.marinas@arm.com,
	chao.p.peng@intel.com, chenhuacai@kernel.org, dave.hansen@intel.com,
	david@redhat.com, dmatlack@google.com, dwmw@amazon.co.uk,
	erdemaktas@google.com, fan.du@intel.com, fvdl@google.com, graf@amazon.com,
	haibo1.xu@intel.com, hch@infradead.org, hughd@google.com,
	ira.weiny@intel.com, isaku.yamahata@intel.com, jack@suse.cz,
	james.morse@arm.com, jarkko@kernel.org, jgg@ziepe.ca, jgowans@amazon.com,
	jhubbard@nvidia.com, jroedel@suse.de, jthoughton@google.com,
	jun.miao@intel.com, kai.huang@intel.com, keirf@google.com,
	kent.overstreet@linux.dev, kirill.shutemov@intel.com,
	liam.merwick@oracle.com, maciej.wieczor-retman@intel.com,
	mail@maciej.szmigiero.name, maz@kernel.org, mic@digikod.net,
	michael.roth@amd.com, mpe@ellerman.id.au, muchun.song@linux.dev,
	nikunj@amd.com, nsaenz@amazon.es, oliver.upton@linux.dev,
	palmer@dabbelt.com, pankaj.gupta@amd.com, paul.walmsley@sifive.com,
	pbonzini@redhat.com, pdurrant@amazon.co.uk, peterx@redhat.com,
	pgonda@google.com, pvorel@suse.cz, qperret@google.com,
	quic_cvanscha@quicinc.com, quic_eberman@quicinc.com,
	quic_mnalajal@quicinc.com, quic_pderrin@quicinc.com,
	quic_pheragu@quicinc.com, quic_svaddagi@quicinc.com,
	quic_tsoni@quicinc.com, richard.weiyang@gmail.com,
	rick.p.edgecombe@intel.com, rientjes@google.com,
	roypat@amazon.co.uk, rppt@kernel.org, seanjc@google.com, shuah@kernel.org,
	steven.price@arm.com, steven.sistare@oracle.com, suzuki.poulose@arm.com,
	tabba@google.com, thomas.lendacky@amd.com, usama.arif@bytedance.com,
	vannapurve@google.com, vbabka@suse.cz, viro@zeniv.linux.org.uk,
	vkuznets@redhat.com, wei.w.wang@intel.com, will@kernel.org,
	willy@infradead.org, xiaoyao.li@intel.com, yan.y.zhao@intel.com,
	yilun.xu@intel.com, yuzenghui@huawei.com, zhiquan1.li@intel.com,
	Mike Day

From: Elliot Berman

The plan is to support multiple allocators for guest_memfd folios. To
let each allocator handle the release of a folio from a guest_memfd
filemap, ->free_folio() needs access to allocator information that is
stored on the guest_memfd inode.

->free_folio() must not assume that folio->mapping is still set or
valid, but the mapping is readily available to every caller of
->free_folio(). Hence, pass the address_space mapping to ->free_folio()
so that the callback can retrieve any information it needs.
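For illustration, a guest_memfd ->free_folio() built on top of this
change might use the passed mapping roughly as sketched below. This is a
minimal, hypothetical sketch: the allocator structure and the
i_private-based lookup are illustrative assumptions only and are not
introduced by this patch.

/* Hypothetical per-inode allocator state; not part of this patch. */
struct kvm_gmem_allocator {
	void (*free_folio)(struct kvm_gmem_allocator *alloc,
			   struct folio *folio);
};

static void kvm_gmem_free_folio(struct address_space *mapping,
				struct folio *folio)
{
	/*
	 * folio->mapping may already be cleared by the time this runs
	 * (e.g. during reclaim), so use the mapping the caller passes
	 * in to reach the guest_memfd inode.
	 */
	struct kvm_gmem_allocator *alloc = mapping->host->i_private;

	folio_clear_unevictable(folio);

	if (alloc && alloc->free_folio)
		alloc->free_folio(alloc, folio);
}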
Link: https://lore.kernel.org/all/15f665b4-2d33-41ca-ac50-fafe24ade32f@redhat.com/
Suggested-by: David Hildenbrand
Acked-by: David Hildenbrand
Change-Id: I8bac907832a0b2491fa403a6ab72fcef1b4713ee
Signed-off-by: Elliot Berman
Tested-by: Mike Day
Signed-off-by: Ackerley Tng
---
 Documentation/filesystems/locking.rst |  2 +-
 Documentation/filesystems/vfs.rst     | 15 +++++++++------
 fs/nfs/dir.c                          |  9 +++++++--
 fs/orangefs/inode.c                   |  3 ++-
 include/linux/fs.h                    |  2 +-
 mm/filemap.c                          |  9 +++++----
 mm/secretmem.c                        |  3 ++-
 mm/vmscan.c                           |  4 ++--
 virt/kvm/guest_memfd.c                |  3 ++-
 9 files changed, 31 insertions(+), 19 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index 0ec0bb6eb0fb..c3d7430481ae 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -263,7 +263,7 @@ prototypes::
 	sector_t (*bmap)(struct address_space *, sector_t);
 	void (*invalidate_folio) (struct folio *, size_t start, size_t len);
 	bool (*release_folio)(struct folio *, gfp_t);
-	void (*free_folio)(struct folio *);
+	void (*free_folio)(struct address_space *, struct folio *);
 	int (*direct_IO)(struct kiocb *, struct iov_iter *iter);
 	int (*migrate_folio)(struct address_space *, struct folio *dst,
 			struct folio *src, enum migrate_mode);
diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst
index ae79c30b6c0c..bba1ac848f96 100644
--- a/Documentation/filesystems/vfs.rst
+++ b/Documentation/filesystems/vfs.rst
@@ -833,7 +833,7 @@ cache in your filesystem. The following members are defined:
 	sector_t (*bmap)(struct address_space *, sector_t);
 	void (*invalidate_folio) (struct folio *, size_t start, size_t len);
 	bool (*release_folio)(struct folio *, gfp_t);
-	void (*free_folio)(struct folio *);
+	void (*free_folio)(struct address_space *, struct folio *);
 	ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter);
 	int (*migrate_folio)(struct mapping *, struct folio *dst,
 			struct folio *src, enum migrate_mode);
@@ -1011,11 +1011,14 @@ cache in your filesystem. The following members are defined:
 	clear the uptodate flag if it cannot free private data yet.
 
 ``free_folio``
-	free_folio is called once the folio is no longer visible in the
-	page cache in order to allow the cleanup of any private data.
-	Since it may be called by the memory reclaimer, it should not
-	assume that the original address_space mapping still exists, and
-	it should not block.
+	free_folio is called once the folio is no longer visible in
+	the page cache in order to allow the cleanup of any private
+	data. Since it may be called by the memory reclaimer, it
+	should not assume that the original address_space mapping
+	still exists at folio->mapping. The mapping the folio used to
+	belong to is instead passed for free_folio to read any
+	information it might need from the mapping. free_folio should
+	not block.
 
 ``direct_IO``
 	called by the generic read/write routines to perform direct_IO -
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index bd23fc736b39..148433f6d9d4 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -55,7 +55,7 @@ static int nfs_closedir(struct inode *, struct file *);
 static int nfs_readdir(struct file *, struct dir_context *);
 static int nfs_fsync_dir(struct file *, loff_t, loff_t, int);
 static loff_t nfs_llseek_dir(struct file *, loff_t, int);
-static void nfs_readdir_clear_array(struct folio *);
+static void nfs_free_folio(struct address_space *, struct folio *);
 static int nfs_do_create(struct inode *dir, struct dentry *dentry,
 			 umode_t mode, int open_flags);
 
@@ -69,7 +69,7 @@ const struct file_operations nfs_dir_operations = {
 };
 
 const struct address_space_operations nfs_dir_aops = {
-	.free_folio = nfs_readdir_clear_array,
+	.free_folio = nfs_free_folio,
 };
 
 #define NFS_INIT_DTSIZE PAGE_SIZE
@@ -230,6 +230,11 @@ static void nfs_readdir_clear_array(struct folio *folio)
 	kunmap_local(array);
 }
 
+static void nfs_free_folio(struct address_space *mapping, struct folio *folio)
+{
+	nfs_readdir_clear_array(folio);
+}
+
 static void nfs_readdir_folio_reinit_array(struct folio *folio,
 					   u64 last_cookie, u64 change_attr)
 {
diff --git a/fs/orangefs/inode.c b/fs/orangefs/inode.c
index 5ac743c6bc2e..884cc5295f3e 100644
--- a/fs/orangefs/inode.c
+++ b/fs/orangefs/inode.c
@@ -449,7 +449,8 @@ static bool orangefs_release_folio(struct folio *folio, gfp_t foo)
 	return !folio_test_private(folio);
 }
 
-static void orangefs_free_folio(struct folio *folio)
+static void orangefs_free_folio(struct address_space *mapping,
+				struct folio *folio)
 {
 	kfree(folio_detach_private(folio));
 }
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 0fded2e3c661..9862ea92a2af 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -455,7 +455,7 @@ struct address_space_operations {
 	sector_t (*bmap)(struct address_space *, sector_t);
 	void (*invalidate_folio) (struct folio *, size_t offset, size_t len);
 	bool (*release_folio)(struct folio *, gfp_t);
-	void (*free_folio)(struct folio *folio);
+	void (*free_folio)(struct address_space *mapping, struct folio *folio);
 	ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter);
 	/*
 	 * migrate the contents of a folio to the specified target. If
diff --git a/mm/filemap.c b/mm/filemap.c
index bed7160db214..a02c3d8e00e8 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -226,11 +226,11 @@ void __filemap_remove_folio(struct folio *folio, void *shadow)
 
 void filemap_free_folio(struct address_space *mapping, struct folio *folio)
 {
-	void (*free_folio)(struct folio *);
+	void (*free_folio)(struct address_space*, struct folio *);
 
 	free_folio = mapping->a_ops->free_folio;
 	if (free_folio)
-		free_folio(folio);
+		free_folio(mapping, folio);
 
 	folio_put_refs(folio, folio_nr_pages(folio));
 }
@@ -820,7 +820,8 @@ EXPORT_SYMBOL(file_write_and_wait_range);
 void replace_page_cache_folio(struct folio *old, struct folio *new)
 {
 	struct address_space *mapping = old->mapping;
-	void (*free_folio)(struct folio *) = mapping->a_ops->free_folio;
+	void (*free_folio)(struct address_space *, struct folio *) =
+		mapping->a_ops->free_folio;
 	pgoff_t offset = old->index;
 	XA_STATE(xas, &mapping->i_pages, offset);
 
@@ -849,7 +850,7 @@ void replace_page_cache_folio(struct folio *old, struct folio *new)
 		__lruvec_stat_add_folio(new, NR_SHMEM);
 	xas_unlock_irq(&xas);
 	if (free_folio)
-		free_folio(old);
+		free_folio(mapping, old);
 	folio_put(old);
 }
 EXPORT_SYMBOL_GPL(replace_page_cache_folio);
diff --git a/mm/secretmem.c b/mm/secretmem.c
index c0e459e58cb6..178507c1b900 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -152,7 +152,8 @@ static int secretmem_migrate_folio(struct address_space *mapping,
 	return -EBUSY;
 }
 
-static void secretmem_free_folio(struct folio *folio)
+static void secretmem_free_folio(struct address_space *mapping,
+				 struct folio *folio)
 {
 	set_direct_map_default_noflush(&folio->page);
 	folio_zero_segment(folio, 0, folio_size(folio));
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3783e45bfc92..b8add4d0cf18 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -788,7 +788,7 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio,
 		xa_unlock_irq(&mapping->i_pages);
 		put_swap_folio(folio, swap);
 	} else {
-		void (*free_folio)(struct folio *);
+		void (*free_folio)(struct address_space *, struct folio *);
 
 		free_folio = mapping->a_ops->free_folio;
 		/*
@@ -817,7 +817,7 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio,
 			spin_unlock(&mapping->host->i_lock);
 
 		if (free_folio)
-			free_folio(folio);
+			free_folio(mapping, folio);
 	}
 
 	return 1;
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 24d270b9b725..c578d0ebe314 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -1319,7 +1319,8 @@ static void kvm_gmem_invalidate(struct folio *folio)
 static inline void kvm_gmem_invalidate(struct folio *folio) {}
 #endif
 
-static void kvm_gmem_free_folio(struct folio *folio)
+static void kvm_gmem_free_folio(struct address_space *mapping,
+				struct folio *folio)
 {
 	folio_clear_unevictable(folio);
 
-- 
2.49.0.1045.g170613ef41-goog
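As a usage note for anyone converting an existing ->free_folio()
implementation to the new signature, the pattern matches the in-tree
conversions above. The sketch below is hypothetical ("myfs" and its aops
are illustrative, not real code) and only shows the documented
constraints in action: free_folio may be called from reclaim, must not
block, and must not rely on folio->mapping.

static void myfs_free_folio(struct address_space *mapping,
			    struct folio *folio)
{
	/* Per-folio private data is released exactly as before. */
	kfree(folio_detach_private(folio));

	/*
	 * Per-inode information must come from the mapping argument:
	 * folio->mapping may already be NULL here, and this path can
	 * run from reclaim, so do not sleep or take blocking locks.
	 */
	pr_debug("myfs: freed folio from inode %lu\n", mapping->host->i_ino);
}

static const struct address_space_operations myfs_aops = {
	.free_folio	= myfs_free_folio,
};

The aops table would then be installed on the inode's mapping as usual,
e.g. inode->i_mapping->a_ops = &myfs_aops.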