From: Christoph Hellwig <hch@infradead.org>
To: Keith Busch <kbusch@meta.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Matthew Wilcox <willy@infradead.org>,
	Tony Battersby <tonyb@cybernetics.com>,
	Kernel Team <kernel-team@meta.com>,
	Keith Busch <kbusch@kernel.org>
Subject: Re: [PATCHv2 11/11] dmapool: link blocks across pages
Date: Fri, 23 Dec 2022 08:58:15 -0800
Message-ID: <Y6XeJ2mzd8p73J93@infradead.org>
In-Reply-To: <20221216201625.2362737-12-kbusch@meta.com>

On Fri, Dec 16, 2022 at 12:16:25PM -0800, Keith Busch wrote:
>  	unsigned int size;
>  	unsigned int allocation;
>  	unsigned int boundary;
> +	size_t nr_blocks;
> +	size_t nr_active;
> +	size_t nr_pages;

Should these be unsigned int like the counters above?
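(For illustration only, the variant being suggested would simply mirror the
declarations quoted above -- a sketch, not a tested change:

	unsigned int size;
	unsigned int allocation;
	unsigned int boundary;
	unsigned int nr_blocks;
	unsigned int nr_active;
	unsigned int nr_pages;
)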

> +static inline struct dma_block *pool_block_pop(struct dma_pool *pool)
> +{
> +	struct dma_block *block = pool->next_block;
> +
> +	if (block) {
> +		pool->next_block = block->next_block;
> +		pool->nr_active++;
> +	}
> +	return block;
> +}
> +
> +static inline void pool_block_push(struct dma_pool *pool, struct dma_block *block,
> +				 dma_addr_t dma)
> +{
> +	block->dma = dma;
> +	block->next_block = pool->next_block;
> +	pool->next_block = block;
> +}

Any point in marking these inline vs just letting the compiler do
its job?

> @@ -162,6 +176,10 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
>  	retval->size = size;
>  	retval->boundary = boundary;
>  	retval->allocation = allocation;
> +	retval->nr_blocks = 0;
> +	retval->nr_active = 0;
> +	retval->nr_pages = 0;
> +	retval->next_block = NULL;

Maybe just switch to kzalloc so that you don't have to bother
initializing individual fields.  It's not like dma_pool_create is
called from anything near a fast path.
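For reference, a minimal sketch of what that could look like, assuming the
existing kmalloc_node() allocation in dma_pool_create() (not the actual
patch):

	retval = kzalloc_node(sizeof(*retval), GFP_KERNEL, dev_to_node(dev));
	if (!retval)
		return retval;

	retval->dev = dev;
	retval->size = size;
	retval->boundary = boundary;
	retval->allocation = allocation;
	/* nr_blocks, nr_active, nr_pages and next_block start out zeroed */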

>  static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
>  {
> +	unsigned int next_boundary = pool->boundary, offset = 0;
> +	struct dma_block *block;
> +
> +	while (offset + pool->size <= pool->allocation) {
> +		if (offset + pool->size > next_boundary) {
> +			offset = next_boundary;
>  			next_boundary += pool->boundary;
> +			continue;
>  		}
> +
> +		block = page->vaddr + offset;
> +		pool_block_push(pool, block, page->dma + offset);

So I guess with this pool_initialise_page needs to be called under
the lock anyway, but just doing it silently in the previous patch
seems a bit odd.
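For context, a hedged sketch of what making the locking explicit at the call
site might look like; pool->lock and page_list are existing dma_pool fields,
but the surrounding dma_pool_alloc() code here is paraphrased, not taken from
the patch:

	page = pool_alloc_page(pool, mem_flags);
	if (!page)
		return NULL;

	spin_lock_irqsave(&pool->lock, flags);
	/*
	 * pool_initialise_page() now threads the new page's blocks onto the
	 * shared pool->next_block list and bumps the counters, so it has to
	 * run with pool->lock held.
	 */
	pool_initialise_page(pool, page);
	list_add(&page->page_list, &pool->page_list);
	block = pool_block_pop(pool);
	spin_unlock_irqrestore(&pool->lock, flags);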

> +static inline void pool_check_block(struct dma_pool *pool, struct dma_block *block,
> +				    gfp_t mem_flags)

I didn't spot this earlier, but inline on a relatively expensive debug
helper is a bit silly.

Otherwise this looks like a nice improvement by using a better and
simpler data structure.


