From: "Gupta, Pankaj" <pankaj.gupta@amd.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
Shivank Garg <shivankg@amd.com>,
seanjc@google.com, david@redhat.com, vbabka@suse.cz,
akpm@linux-foundation.org, shuah@kernel.org, pbonzini@redhat.com,
brauner@kernel.org, viro@zeniv.linux.org.uk
Cc: ackerleytng@google.com, paul@paul-moore.com, jmorris@namei.org,
serge@hallyn.com, pvorel@suse.cz, bfoster@redhat.com,
tabba@google.com, vannapurve@google.com, chao.gao@intel.com,
bharata@amd.com, nikunj@amd.com, michael.day@amd.com,
yan.y.zhao@intel.com, Neeraj.Upadhyay@amd.com,
thomas.lendacky@amd.com, michael.roth@amd.com, aik@amd.com,
jgg@nvidia.com, kalyazin@amazon.com, peterx@redhat.com,
jack@suse.cz, rppt@kernel.org, hch@infradead.org,
cgzones@googlemail.com, ira.weiny@intel.com, rientjes@google.com,
roypat@amazon.co.uk, ziy@nvidia.com, matthew.brost@intel.com,
joshua.hahnjy@gmail.com, rakie.kim@sk.com, byungchul@sk.com,
gourry@gourry.net, kent.overstreet@linux.dev,
ying.huang@linux.alibaba.com, apopple@nvidia.com,
chao.p.peng@intel.com, amit@infradead.org, ddutile@redhat.com,
dan.j.williams@intel.com, ashish.kalra@amd.com, gshan@redhat.com,
jgowans@amazon.com, papaluri@amd.com, yuzhao@google.com,
suzuki.poulose@arm.com, quic_eberman@quicinc.com,
aneeshkumar.kizhakeveetil@arm.com, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-security-module@vger.kernel.org, kvm@vger.kernel.org,
linux-kselftest@vger.kernel.org, linux-coco@lists.linux.dev
Subject: Re: [PATCH 1/2] filemap: Add a mempolicy argument to filemap_alloc_folio()
Date: Mon, 23 Jun 2025 08:13:39 +0200
Message-ID: <9ec4c72e-8895-42d3-9b3c-0794fc9c5c34@amd.com>
In-Reply-To: <20250620143502.3055777-1-willy@infradead.org>
> guest_memfd needs to support memory policies, so add a mempolicy
> argument to filemap_alloc_folio(). All existing users pass NULL; the
> first user will show up later in this series.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
> ---
> fs/bcachefs/fs-io-buffered.c | 2 +-
> fs/btrfs/compression.c | 3 ++-
> fs/btrfs/verity.c | 2 +-
> fs/erofs/zdata.c | 2 +-
> fs/f2fs/compress.c | 2 +-
> include/linux/pagemap.h | 6 +++---
> mm/filemap.c | 13 +++++++++----
> mm/readahead.c | 2 +-
> 8 files changed, 19 insertions(+), 13 deletions(-)
>
> diff --git a/fs/bcachefs/fs-io-buffered.c b/fs/bcachefs/fs-io-buffered.c
> index 66bacdd49f78..392344232b16 100644
> --- a/fs/bcachefs/fs-io-buffered.c
> +++ b/fs/bcachefs/fs-io-buffered.c
> @@ -124,7 +124,7 @@ static int readpage_bio_extend(struct btree_trans *trans,
> if (folio && !xa_is_value(folio))
> break;
>
> - folio = filemap_alloc_folio(readahead_gfp_mask(iter->mapping), order);
> + folio = filemap_alloc_folio(readahead_gfp_mask(iter->mapping), order, NULL);
> if (!folio)
> break;
>
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index 48d07939fee4..8430ccf70887 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -475,7 +475,8 @@ static noinline int add_ra_bio_pages(struct inode *inode,
> }
>
> folio = filemap_alloc_folio(mapping_gfp_constraint(mapping,
> - ~__GFP_FS), 0);
> + ~__GFP_FS),
> + 0, NULL);
> if (!folio)
> break;
>
> diff --git a/fs/btrfs/verity.c b/fs/btrfs/verity.c
> index b7a96a005487..c43a789ba6d2 100644
> --- a/fs/btrfs/verity.c
> +++ b/fs/btrfs/verity.c
> @@ -742,7 +742,7 @@ static struct page *btrfs_read_merkle_tree_page(struct inode *inode,
> }
>
> folio = filemap_alloc_folio(mapping_gfp_constraint(inode->i_mapping, ~__GFP_FS),
> - 0);
> + 0, NULL);
> if (!folio)
> return ERR_PTR(-ENOMEM);
>
> diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
> index fe8071844724..00e9160a0d24 100644
> --- a/fs/erofs/zdata.c
> +++ b/fs/erofs/zdata.c
> @@ -562,7 +562,7 @@ static void z_erofs_bind_cache(struct z_erofs_frontend *fe)
> * Allocate a managed folio for cached I/O, or it may be
> * then filled with a file-backed folio for in-place I/O
> */
> - newfolio = filemap_alloc_folio(gfp, 0);
> + newfolio = filemap_alloc_folio(gfp, 0, NULL);
> if (!newfolio)
> continue;
> newfolio->private = Z_EROFS_PREALLOCATED_FOLIO;
> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
> index b3c1df93a163..7ef937dd7624 100644
> --- a/fs/f2fs/compress.c
> +++ b/fs/f2fs/compress.c
> @@ -1942,7 +1942,7 @@ void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
> return;
> }
>
> - cfolio = filemap_alloc_folio(__GFP_NOWARN | __GFP_IO, 0);
> + cfolio = filemap_alloc_folio(__GFP_NOWARN | __GFP_IO, 0, NULL);
> if (!cfolio)
> return;
>
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index e63fbfbd5b0f..c176aeeb38db 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -646,9 +646,9 @@ static inline void *detach_page_private(struct page *page)
> }
>
> #ifdef CONFIG_NUMA
> -struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order);
> +struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order, struct mempolicy *policy);
> #else
> -static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
> +static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order, struct mempolicy *policy)
> {
> return folio_alloc_noprof(gfp, order);
> }
> @@ -659,7 +659,7 @@ static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int o
>
> static inline struct page *__page_cache_alloc(gfp_t gfp)
> {
> - return &filemap_alloc_folio(gfp, 0)->page;
> + return &filemap_alloc_folio(gfp, 0, NULL)->page;
> }
>
> static inline gfp_t readahead_gfp_mask(struct address_space *x)
> diff --git a/mm/filemap.c b/mm/filemap.c
> index bada249b9fb7..a26df313207d 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -989,11 +989,16 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio,
> EXPORT_SYMBOL_GPL(filemap_add_folio);
>
> #ifdef CONFIG_NUMA
> -struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
> +struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order,
> + struct mempolicy *policy)
> {
> int n;
> struct folio *folio;
>
> + if (policy)
> + return folio_alloc_mpol_noprof(gfp, order, policy,
> + NO_INTERLEAVE_INDEX, numa_node_id());
> +
> if (cpuset_do_page_mem_spread()) {
> unsigned int cpuset_mems_cookie;
> do {
> @@ -1977,7 +1982,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> err = -ENOMEM;
> if (order > min_order)
> alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
> - folio = filemap_alloc_folio(alloc_gfp, order);
> + folio = filemap_alloc_folio(alloc_gfp, order, NULL);
> if (!folio)
> continue;
>
> @@ -2516,7 +2521,7 @@ static int filemap_create_folio(struct kiocb *iocb, struct folio_batch *fbatch)
> if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
> return -EAGAIN;
>
> - folio = filemap_alloc_folio(mapping_gfp_mask(mapping), min_order);
> + folio = filemap_alloc_folio(mapping_gfp_mask(mapping), min_order, NULL);
> if (!folio)
> return -ENOMEM;
> if (iocb->ki_flags & IOCB_DONTCACHE)
> @@ -3854,7 +3859,7 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
> folio = filemap_get_folio(mapping, index);
> if (IS_ERR(folio)) {
> folio = filemap_alloc_folio(gfp,
> - mapping_min_folio_order(mapping));
> + mapping_min_folio_order(mapping), NULL);
> if (!folio)
> return ERR_PTR(-ENOMEM);
> index = mapping_align_index(mapping, index);
> diff --git a/mm/readahead.c b/mm/readahead.c
> index 20d36d6b055e..0b2aec0231e6 100644
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -183,7 +183,7 @@ static struct folio *ractl_alloc_folio(struct readahead_control *ractl,
> {
> struct folio *folio;
>
> - folio = filemap_alloc_folio(gfp_mask, order);
> + folio = filemap_alloc_folio(gfp_mask, order, NULL);
> if (folio && ractl->dropbehind)
> __folio_set_dropbehind(folio);
>
Thread overview: 34+ messages
2025-06-18 11:29 [RFC PATCH v8 0/7] Add NUMA mempolicy support for KVM guest-memfd Shivank Garg
2025-06-18 11:29 ` [RFC PATCH v8 1/7] security: Export anon_inode_make_secure_inode for KVM guest_memfd Shivank Garg
2025-06-18 11:29 ` [RFC PATCH v8 2/7] KVM: guest_memfd: Use guest mem inodes instead of anonymous inodes Shivank Garg
2025-06-18 11:29 ` [RFC PATCH v8 3/7] mm/filemap: Add mempolicy support to the filemap layer Shivank Garg
2025-06-19 15:08 ` Vlastimil Babka
2025-06-19 16:03 ` Matthew Wilcox
2025-06-20 5:59 ` Shivank Garg
2025-06-20 9:37 ` Vlastimil Babka
2025-06-20 14:34 ` Matthew Wilcox
2025-06-20 14:52 ` Shivank Garg
2025-06-20 14:58 ` Matthew Wilcox
2025-06-20 14:34 ` [PATCH 1/2] filemap: Add a mempolicy argument to filemap_alloc_folio() Matthew Wilcox (Oracle)
2025-06-23 6:13 ` Gupta, Pankaj [this message]
2025-06-23 7:19 ` Vlastimil Babka
2025-06-20 14:34 ` [PATCH 2/2] filemap: Add __filemap_get_folio_mpol() Matthew Wilcox (Oracle)
2025-06-20 16:53 ` Matthew Wilcox
2025-06-22 18:43 ` Andrew Morton
2025-06-22 19:02 ` Shivank Garg
2025-06-22 22:16 ` Andrew Morton
2025-06-23 4:18 ` Shivank Garg
2025-06-23 10:01 ` Shivank Garg
2025-06-23 7:16 ` Vlastimil Babka
2025-06-23 9:56 ` Shivank Garg
2025-06-23 6:15 ` Gupta, Pankaj
2025-06-23 7:20 ` Vlastimil Babka
2025-06-18 11:29 ` [RFC PATCH v8 4/7] mm/mempolicy: Export memory policy symbols Shivank Garg
2025-06-18 15:12 ` Gregory Price
2025-06-19 11:13 ` Shivank Garg
2025-06-19 16:28 ` Vlastimil Babka
2025-06-18 11:29 ` [RFC PATCH v8 5/7] KVM: guest_memfd: Add slab-allocated inode cache Shivank Garg
2025-06-24 4:16 ` Huang, Ying
2025-06-29 18:25 ` Shivank Garg
2025-06-18 11:29 ` [RFC PATCH v8 6/7] KVM: guest_memfd: Enforce NUMA mempolicy using shared policy Shivank Garg
2025-06-18 11:29 ` [RFC PATCH v8 7/7] KVM: guest_memfd: selftests: Add tests for mmap and NUMA policy support Shivank Garg