From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: netdev@vger.kernel.org, linux-mm@kvack.org,
"Toke Høiland-Jørgensen" <toke@toke.dk>,
"Ilias Apalodimas" <ilias.apalodimas@linaro.org>,
"Saeed Mahameed" <saeedm@mellanox.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
mgorman@techsingularity.net,
"David S. Miller" <davem@davemloft.net>,
"Tariq Toukan" <tariqt@mellanox.com>,
brouer@redhat.com,
"Willem de Bruijn" <willemdebruijn.kernel@gmail.com>
Subject: Re: [net-next PATCH 1/2] mm: add dma_addr_t to struct page
Date: Tue, 12 Feb 2019 11:06:20 +0100
Message-ID: <20190212110620.5ceb5366@carbon>
In-Reply-To: <20190211165551.GD12668@bombadil.infradead.org>

On Mon, 11 Feb 2019 08:55:51 -0800
Matthew Wilcox <willy@infradead.org> wrote:
> On Mon, Feb 11, 2019 at 05:06:46PM +0100, Jesper Dangaard Brouer wrote:
> > The page_pool API is using page->private to store DMA addresses.
> > As pointed out by David Miller we can't use that on 32-bit architectures
> > with 64-bit DMA
> >
> > This patch adds a new dma_addr_t struct to allow storing DMA addresses
> >
> > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> > Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
>
> Reviewed-by: Matthew Wilcox <willy@infradead.org>
>
> > + struct { /* page_pool used by netstack */
> > + /**
> > + * @dma_addr: Page_pool need to store DMA-addr, and
>
> s/need/needs/
>
> > + * cannot use @private, as DMA-mappings can be 64-bit
>
> s/DMA-mappings/DMA addresses/
>
> > + * even on 32-bit Architectures.
>
> s/A/a/
Yes, that comment needs improvement. I think I'll use AKPM's suggestion.
> > + */
> > + dma_addr_t dma_addr; /* Shares area with @lru */
>
> It also shares with @slab_list, @next, @compound_head, @pgmap and
> @rcu_head. I think it's pointless to try to document which other fields
> something shares space with; the places which do it are a legacy from
> before I rearranged struct page last year. Anyone looking at this should
> now be able to see "Oh, this is a union, only use the fields which are
> in the union for the type of struct page I have here".
I agree, I'll strip that comment.
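Something like this is what I have in mind for the end result (comment
wording illustrative, likely following AKPM's phrasing; the "shares
area" comment dropped as you suggest):

	struct {	/* page_pool used by netstack */
		/**
		 * @dma_addr: might require a 64-bit value even on
		 * 32-bit architectures.
		 */
		dma_addr_t dma_addr;
	};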
> Are the pages allocated from this API ever supposed to be mapped to
> userspace?
I would like to know which fields in struct page we cannot touch if we
want to keep that a possibility.
That said, I hope we don't need to do this. But as I integrate this
further into the netstack code, we might have to support it, or at
least release the page_pool "state" (currently only the DMA address)
before the skb_zcopy code path. The first iteration will not do any
zero-copy stuff; later I'll coordinate with Willem on how to add this,
if needed.
My general opinion is that if an end-user wants to have pages mapped to
userspace, then page_pool (MEM_TYPE_PAGE_POOL) is not the right choice;
they should instead use MEM_TYPE_ZERO_COPY (see enum xdp_mem_type). We
are generally working towards allowing NIC drivers to use a different
memory type per RX-ring.
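For reference, the set of memory types (from include/net/xdp.h, roughly
as it stands today):

	enum xdp_mem_type {
		MEM_TYPE_PAGE_SHARED = 0, /* Split-page refcnt based model */
		MEM_TYPE_PAGE_ORDER0,     /* Orig XDP full page model */
		MEM_TYPE_PAGE_POOL,
		MEM_TYPE_ZERO_COPY,
		MEM_TYPE_MAX,
	};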
> You also say in the documentation:
>
> * If no DMA mapping is done, then it can act as shim-layer that
> * fall-through to alloc_page. As no state is kept on the page, the
> * regular put_page() call is sufficient.
>
> I think this is probably a dangerous precedent to set. Better to require
> exactly one call to page_pool_put_page() (with the understanding that the
> refcount may be elevated, so this may not be the final free of the page,
> but the page will no longer be usable for its page_pool purpose).
Yes, this is actually how it is implemented today, and the comment
should be improved. Today, when the refcount is elevated,
__page_pool_put_page() calls __page_pool_clean_page() to release the
page's page_pool state, and the page is in principle no longer "usable"
for page_pool purposes.
BUT I have considered removing this, as it might not fit how we want to
use the API. In our current RFC we found a need for (and introduced) a
page_pool_unmap_page() call (which calls __page_pool_clean_page()) for
cases where the driver's code path doesn't have a callback to
page_pool_put_page() but instead ends up calling put_page().
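To make the flow concrete, a simplified sketch of what
__page_pool_put_page() does today (not the verbatim implementation; the
recycle step is condensed into a hypothetical page_pool_try_recycle()
helper):

	void __page_pool_put_page(struct page_pool *pool, struct page *page,
				  bool allow_direct)
	{
		if (likely(page_ref_count(page) == 1)) {
			/* Refcnt 1: pool owns the page, recycle it,
			 * either directly (softirq) or via the ring.
			 */
			if (page_pool_try_recycle(pool, page, allow_direct))
				return;
		}
		/* Elevated refcount: release page_pool state (DMA
		 * unmap).  After this the page is no longer usable
		 * for page_pool purposes, and a regular put_page()
		 * is all that remains.
		 */
		__page_pool_clean_page(pool, page);
		put_page(page);
	}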
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer