From: Keith Busch <kbusch@meta.com>
To: <linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
<willy@infradead.org>, <hch@lst.de>, <tonyb@cybernetics.com>,
<akpm@linux-foundation.org>
Cc: <kernel-team@meta.com>, Keith Busch <kbusch@kernel.org>
Subject: [PATCHv4 07/12] dmapool: rearrange page alloc failure handling
Date: Thu, 26 Jan 2023 13:51:20 -0800
Message-ID: <20230126215125.4069751-8-kbusch@meta.com>
In-Reply-To: <20230126215125.4069751-1-kbusch@meta.com>

From: Keith Busch <kbusch@kernel.org>

Handle the allocation error in a condition with an early return so the
success path stays in the normal flow.

Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 mm/dmapool.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 30b069e999968..900f2afa363a9 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -292,17 +292,19 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
 	page = kmalloc(sizeof(*page), mem_flags);
 	if (!page)
 		return NULL;
+
 	page->vaddr = dma_alloc_coherent(pool->dev, pool->allocation,
 					 &page->dma, mem_flags);
-	if (page->vaddr) {
-		pool_init_page(pool, page);
-		pool_initialise_page(pool, page);
-		page->in_use = 0;
-		page->offset = 0;
-	} else {
+	if (!page->vaddr) {
 		kfree(page);
-		page = NULL;
+		return NULL;
 	}
+
+	pool_init_page(pool, page);
+	pool_initialise_page(pool, page);
+	page->in_use = 0;
+	page->offset = 0;
+
 	return page;
 }
 
--
2.30.2
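
For readers who want the shape of the change outside kernel context, here is a
minimal, self-contained userspace C sketch of the same early-return pattern:
each failure is handled in its own condition, so the success path reads
straight down with no else branch. The names below (struct buf, make_buf) are
invented for illustration and are not part of dmapool.

#include <stdlib.h>
#include <string.h>

/* Invented example type; stands in for struct dma_page in this sketch. */
struct buf {
	void *data;
	size_t size;
};

/*
 * Same control flow as pool_alloc_page() after the patch: allocation
 * failures take an early return (freeing what was already allocated),
 * and the success path runs straight to the final return.
 */
static struct buf *make_buf(size_t size)
{
	struct buf *b = malloc(sizeof(*b));

	if (!b)
		return NULL;

	b->data = malloc(size);
	if (!b->data) {
		free(b);
		return NULL;
	}

	memset(b->data, 0, size);
	b->size = size;

	return b;
}

int main(void)
{
	struct buf *b = make_buf(64);

	if (!b)
		return 1;

	free(b->data);
	free(b);
	return 0;
}

The refactor does not change behaviour; it only removes the need for the old
"page = NULL" fallthrough before the shared return.
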
Thread overview: 21+ messages
2023-01-26 21:51 [PATCHv4 00/12] dmapool enhancements Keith Busch
2023-01-26 21:51 ` [PATCHv4 01/12] dmapool: add alloc/free performance test Keith Busch
2023-01-26 21:51 ` [PATCHv4 02/12] dmapool: remove checks for dev == NULL Keith Busch
2023-01-26 21:51 ` [PATCHv4 03/12] dmapool: use sysfs_emit() instead of scnprintf() Keith Busch
2023-01-26 21:51 ` [PATCHv4 04/12] dmapool: cleanup integer types Keith Busch
2023-01-26 21:51 ` [PATCHv4 05/12] dmapool: speedup DMAPOOL_DEBUG with init_on_alloc Keith Busch
2023-01-26 21:51 ` [PATCHv4 06/12] dmapool: move debug code to own functions Keith Busch
2023-01-26 21:51 ` Keith Busch [this message]
2023-01-26 21:51 ` [PATCHv4 08/12] dmapool: consolidate page initialization Keith Busch
2023-01-26 21:51 ` [PATCHv4 09/12] dmapool: simplify freeing Keith Busch
2023-01-26 21:51 ` [PATCHv4 10/12] dmapool: don't memset on free twice Keith Busch
2023-01-26 21:51 ` [PATCHv4 11/12] dmapool: link blocks across pages Keith Busch
2023-02-01 17:42 ` Bryan O'Donoghue
2023-02-01 17:43 ` Keith Busch
2023-02-02 0:38 ` Bryan O'Donoghue
2023-02-27 0:54 ` Guenter Roeck
2023-02-28 1:01 ` Keith Busch
2023-02-28 2:18 ` Guenter Roeck
2023-01-26 21:51 ` [PATCHv4 12/12] dmapool: create/destroy cleanup Keith Busch
2023-01-26 22:22 ` [PATCHv4 00/12] dmapool enhancements Andrew Morton
2023-01-27 0:27 ` Keith Busch