From: Vlastimil Babka <vbabka@suse.cz>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Christoph Lameter <cl@gentwo.org>,
David Rientjes <rientjes@google.com>,
Roman Gushchin <roman.gushchin@linux.dev>,
Harry Yoo <harry.yoo@oracle.com>,
"Martin K. Petersen" <martin.petersen@oracle.com>,
linux-block@vger.kernel.org, linux-mm@kvack.org,
David Hildenbrand <david@redhat.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>, Zi Yan <ziy@nvidia.com>
Subject: Re: [PATCH 1/3] slab, block: generalize bvec_alloc_gfp
Date: Fri, 24 Oct 2025 10:38:20 +0200 [thread overview]
Message-ID: <50e96fd8-114b-4de3-939e-9ba606e64b06@suse.cz> (raw)
In-Reply-To: <20251023080919.9209-2-hch@lst.de>
On 10/23/25 10:08, Christoph Hellwig wrote:
> bvec_alloc_gfp is useful for any place that tries to kmalloc first and
> then fall back to a mempool. Rename it and move it to blk.h to prepare
I wonder if such fallbacks are necessary, because IIRC mempools try to
allocate from the underlying provider (i.e. the kmalloc caches) first, and
only hand out the reserves when that fails. Is the open-coded first attempt
done to reduce overhead, or something else?
> for using it to allocate the default integrity buffer.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
That says blk.h, but you move it to slab.h? Assuming you intended slab.h.
However, gfp flags are not slab-only, so it should rather go in
include/linux/gfp.h - added the maintainers of that to Cc.
We do have gfp_nested_mask() there, which is quite similar but not identical.
Maybe a canonical macro not for nested but for opportunistic allocations
(if a one-size-fits-all solution can be found) would be useful too, as
people indeed reinvent these manually in various places, with subtle
differences.
> ---
> block/bio.c | 13 ++-----------
> include/linux/slab.h | 10 ++++++++++
> 2 files changed, 12 insertions(+), 11 deletions(-)
>
> diff --git a/block/bio.c b/block/bio.c
> index b3a79285c278..4ea5833a7637 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -169,16 +169,6 @@ void bvec_free(mempool_t *pool, struct bio_vec *bv, unsigned short nr_vecs)
> kmem_cache_free(biovec_slab(nr_vecs)->slab, bv);
> }
>
> -/*
> - * Make the first allocation restricted and don't dump info on allocation
> - * failures, since we'll fall back to the mempool in case of failure.
> - */
> -static inline gfp_t bvec_alloc_gfp(gfp_t gfp)
> -{
> - return (gfp & ~(__GFP_DIRECT_RECLAIM | __GFP_IO)) |
> - __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN;
> -}
> -
> struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
> gfp_t gfp_mask)
> {
> @@ -201,7 +191,8 @@ struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
> if (*nr_vecs < BIO_MAX_VECS) {
> struct bio_vec *bvl;
>
> - bvl = kmem_cache_alloc(bvs->slab, bvec_alloc_gfp(gfp_mask));
> + bvl = kmem_cache_alloc(bvs->slab,
> + try_alloc_gfp(gfp_mask & ~__GFP_IO));
> if (likely(bvl) || !(gfp_mask & __GFP_DIRECT_RECLAIM))
> return bvl;
> *nr_vecs = BIO_MAX_VECS;
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index d5a8ab98035c..a6672cead03e 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -1113,6 +1113,16 @@ void kfree_rcu_scheduler_running(void);
> */
> size_t kmalloc_size_roundup(size_t size);
>
> +/*
> + * Make the first allocation restricted and don't dump info on allocation
> + * failures, for callers that will fall back to a mempool in case of failure.
> + */
> +static inline gfp_t try_alloc_gfp(gfp_t gfp)
> +{
> + return (gfp & ~__GFP_DIRECT_RECLAIM) |
> + __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN;
Note that without __GFP_DIRECT_RECLAIM, the __GFP_NORETRY is most likely
redundant (but doesn't hurt).
> +}
> +
> void __init kmem_cache_init_late(void);
> void __init kvfree_rcu_init(void);
>
Thread overview: 14+ messages
2025-10-23 8:08 make block layer auto-PI deadlock safe Christoph Hellwig
2025-10-23 8:08 ` [PATCH 1/3] slab, block: generalize bvec_alloc_gfp Christoph Hellwig
2025-10-24 1:44 ` Martin K. Petersen
2025-10-24 8:38 ` Vlastimil Babka [this message]
2025-10-24 9:05 ` Christoph Hellwig
2025-10-26 21:19 ` Matthew Wilcox
2025-10-27 6:47 ` Christoph Hellwig
2025-10-27 13:09 ` Matthew Wilcox
2025-10-27 13:14 ` Christoph Hellwig
2025-10-23 8:08 ` [PATCH 2/3] block: blocking mempool_alloc doesn't fail Christoph Hellwig
2025-10-24 1:45 ` Martin K. Petersen
2025-10-23 8:08 ` [PATCH 3/3] block: make bio auto-integrity deadlock safe Christoph Hellwig
2025-10-24 1:47 ` Martin K. Petersen
2025-10-27 6:03 ` Kanchan Joshi