linux-mm.kvack.org archive mirror
From: Jesper Dangaard Brouer <hawk@kernel.org>
To: "Toke Høiland-Jørgensen" <toke@redhat.com>,
	"David S. Miller" <davem@davemloft.net>,
	"Jakub Kicinski" <kuba@kernel.org>,
	"Saeed Mahameed" <saeedm@nvidia.com>,
	"Leon Romanovsky" <leon@kernel.org>,
	"Tariq Toukan" <tariqt@nvidia.com>,
	"Andrew Lunn" <andrew+netdev@lunn.ch>,
	"Eric Dumazet" <edumazet@google.com>,
	"Paolo Abeni" <pabeni@redhat.com>,
	"Ilias Apalodimas" <ilias.apalodimas@linaro.org>,
	"Simon Horman" <horms@kernel.org>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"Mina Almasry" <almasrymina@google.com>,
	"Yonglong Liu" <liuyonglong@huawei.com>,
	"Yunsheng Lin" <linyunsheng@huawei.com>,
	"Pavel Begunkov" <asml.silence@gmail.com>,
	"Matthew Wilcox" <willy@infradead.org>
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org,
	linux-rdma@vger.kernel.org, linux-mm@kvack.org,
	Qiuling Ren <qren@redhat.com>, Yuying Ma <yuma@redhat.com>
Subject: Re: [PATCH net-next v2 3/3] page_pool: Track DMA-mapped pages and unmap them when destroying the pool
Date: Wed, 26 Mar 2025 14:54:26 +0100	[thread overview]
Message-ID: <e9e0affd-683d-418b-9618-4b1a69095342@kernel.org> (raw)
In-Reply-To: <20250325-page-pool-track-dma-v2-3-113ebc1946f3@redhat.com>



On 25/03/2025 16.45, Toke Høiland-Jørgensen wrote:
> When enabling DMA mapping in page_pool, pages are kept DMA mapped until
> they are released from the pool, to avoid the overhead of re-mapping the
> pages every time they are used. This causes resource leaks and/or
> crashes when there are pages still outstanding while the device is torn
> down, because page_pool will attempt an unmap through a non-existent DMA
> device on the subsequent page return.
> 
> To fix this, implement a simple tracking of outstanding DMA-mapped pages
> in page pool using an xarray. This was first suggested by Mina[0], and
> turns out to be fairly straightforward: we simply store pointers to
> pages directly in the xarray with xa_alloc() when they are first DMA
> mapped, and remove them from the array on unmap. Then, when a page pool
> is torn down, it can simply walk the xarray and unmap all pages still
> present there before returning, which also allows us to get rid of the
> get/put_device() calls in page_pool. Using xa_cmpxchg(), no additional
> synchronisation is needed, as a page will only ever be unmapped once.
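
For illustration, a minimal sketch of the xarray-based tracking described
above follows. The helper names and the standalone tracker struct are made
up for this sketch; the actual patch hangs the xarray off struct page_pool
and tracks netmem references rather than bare pages:

#include <linux/xarray.h>
#include <linux/dma-mapping.h>
#include <net/page_pool/helpers.h>

/* Hypothetical standalone tracker; the real patch embeds this state in
 * struct page_pool.
 */
struct pp_dma_tracker {
	struct xarray dma_mapped;
};

static void pp_dma_tracker_init(struct pp_dma_tracker *t)
{
	/* XA_FLAGS_ALLOC so xa_alloc() hands out the ids stashed in pp_magic */
	xa_init_flags(&t->dma_mapped, XA_FLAGS_ALLOC);
}

/* On DMA map: remember the page and get an id back */
static int pp_dma_track_map(struct pp_dma_tracker *t, struct page *page,
			    u32 *id)
{
	return xa_alloc(&t->dma_mapped, id, page, xa_limit_32b, GFP_ATOMIC);
}

/* On DMA unmap: only the caller that wins the cmpxchg actually unmaps */
static void pp_dma_untrack_unmap(struct pp_dma_tracker *t, struct device *dev,
				 struct page *page, unsigned long id)
{
	if (xa_cmpxchg(&t->dma_mapped, id, page, NULL, GFP_ATOMIC) == page)
		dma_unmap_page(dev, page_pool_get_dma_addr(page),
			       PAGE_SIZE, DMA_BIDIRECTIONAL);
}

/* On pool teardown: unmap whatever is still outstanding, free the nodes */
static void pp_dma_unmap_all(struct pp_dma_tracker *t, struct device *dev)
{
	struct page *page;
	unsigned long id;

	xa_for_each(&t->dma_mapped, id, page)
		pp_dma_untrack_unmap(t, dev, page, id);
	xa_destroy(&t->dma_mapped);
}

The xa_cmpxchg() in the unmap path is what makes the teardown walk safe
without extra locking: whichever caller wins the exchange performs the
unmap, the other sees a mismatch and skips it, so a page is only ever
unmapped once.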
> 
> To avoid having to walk the entire xarray on unmap to find the page
> reference, we stash the ID assigned by xa_alloc() into the page
> structure itself, using the upper bits of the pp_magic field. This
> requires a couple of defines to avoid conflicting with the
> POINTER_POISON_DELTA define, but this is all evaluated at compile-time,
> so does not affect run-time performance. The bitmap calculations in this
> patch give the following numbers of bits for different architectures:
> 
> - 23 bits on 32-bit architectures
> - 21 bits on PPC64 (because of the definition of ILLEGAL_POINTER_VALUE)
> - 32 bits on other 64-bit architectures
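
To make the stashing concrete, a hypothetical packing helper could look
like the following. The macro names and the fixed 32-bit budget are
placeholders for this sketch (assuming a 64-bit pp_magic); the patch
itself derives the shift and width from PP_SIGNATURE,
POISON_POINTER_DELTA and the architecture, giving the bit counts listed
above:

#include <linux/mm_types.h>

/* Placeholder layout: low bits keep the pool signature, high bits hold
 * the id handed out by xa_alloc().
 */
#define EXAMPLE_DMA_INDEX_SHIFT	32
#define EXAMPLE_DMA_INDEX_MASK	(~0UL << EXAMPLE_DMA_INDEX_SHIFT)

static void example_set_dma_index(struct page *page, unsigned long id)
{
	page->pp_magic = (page->pp_magic & ~EXAMPLE_DMA_INDEX_MASK) |
			 (id << EXAMPLE_DMA_INDEX_SHIFT);
}

static unsigned long example_get_dma_index(const struct page *page)
{
	return page->pp_magic >> EXAMPLE_DMA_INDEX_SHIFT;
}

The low bits are left untouched, the idea being that the signature check
(moved into helpers by patch 1 of this series) only looks at the bits not
used for the index.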
> 
> Stashing a value into the unused bits of pp_magic does have the effect
> that it can make the value stored there lie outside the unmappable
> range (as governed by the mmap_min_addr sysctl), for architectures that
> don't define ILLEGAL_POINTER_VALUE. This means that if one of the
> pointers that is aliased to the pp_magic field (such as page->lru.next)
> is dereferenced while the page is owned by page_pool, that could lead to
> a dereference into userspace, which is a security concern. The risk of
> this is mitigated by the fact that (a) we always clear pp_magic before
> releasing a page from page_pool, and (b) this would need a
> use-after-free bug for struct page, which can have many other risks
> since page->lru.next is used as a generic list pointer in multiple
> places in the kernel. As such, with this patch we take the position that
> this risk is negligible in practice. For more discussion, see[1].
> 
> Since all the tracking added in this patch is performed on DMA
> map/unmap, no additional code is needed in the fast path, meaning the
> performance overhead of this tracking is negligible there. A
> micro-benchmark shows that the total overhead of the tracking itself is
> about 400 ns (39 cycles(tsc) 395.218 ns; sum for both map and unmap[2]).
> Since this cost is only paid on DMA map and unmap, it seems like an
> acceptable cost to fix the late unmap issue. Further optimisation can
> narrow the cases where this cost is paid (for instance by eliding the
> tracking when DMA map/unmap is a no-op).
> 
> The extra memory needed to track the pages is neatly encapsulated inside
> xarray, which uses the 'struct xa_node' structure to track items. This
> structure is 576 bytes long, with slots for 64 items, meaning that a
> full node incurs only 9 bytes of overhead per slot it tracks (in
> practice, it probably won't be this efficient, but in any case it should
> be an acceptable overhead).
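
Expressed as a quick back-of-the-envelope check (the 576-byte node size
and 64 slots are the figures quoted above; the real values depend on
kernel configuration, so treat these defines as assumptions):

#define ASSUMED_XA_NODE_BYTES	576	/* sizeof(struct xa_node) quoted above */
#define ASSUMED_XA_NODE_SLOTS	64	/* XA_CHUNK_SIZE with the default chunk shift */

/* 576 / 64 == 9 bytes of bookkeeping per DMA-mapped page in a full node */
#define ASSUMED_BYTES_PER_TRACKED_PAGE \
	(ASSUMED_XA_NODE_BYTES / ASSUMED_XA_NODE_SLOTS)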
> 
> [0] https://lore.kernel.org/all/CAHS8izPg7B5DwKfSuzz-iOop_YRbk3Sd6Y4rX7KBG9DcVJcyWg@mail.gmail.com/
> [1] https://lore.kernel.org/r/20250320023202.GA25514@openwall.com
> [2] https://lore.kernel.org/r/ae07144c-9295-4c9d-a400-153bb689fe9e@huawei.com
> 
> Reported-by: Yonglong Liu <liuyonglong@huawei.com>
> Closes: https://lore.kernel.org/r/8743264a-9700-4227-a556-5f931c720211@huawei.com
> Fixes: ff7d6b27f894 ("page_pool: refurbish version of page_pool code")
> Suggested-by: Mina Almasry <almasrymina@google.com>
> Reviewed-by: Mina Almasry <almasrymina@google.com>
> Reviewed-by: Jesper Dangaard Brouer <hawk@kernel.org>
> Tested-by: Jesper Dangaard Brouer <hawk@kernel.org>
> Tested-by: Qiuling Ren <qren@redhat.com>
> Tested-by: Yuying Ma <yuma@redhat.com>
> Tested-by: Yonglong Liu <liuyonglong@huawei.com>
> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
> ---
>   include/linux/poison.h        |  4 +++
>   include/net/page_pool/types.h | 49 +++++++++++++++++++++++---
>   net/core/netmem_priv.h        | 28 ++++++++++++++-
>   net/core/page_pool.c          | 82 ++++++++++++++++++++++++++++++++++++-------
>   4 files changed, 145 insertions(+), 18 deletions(-)


Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>



Thread overview: 17+ messages
2025-03-25 15:45 [PATCH net-next v2 0/3] Fix late DMA unmap crash for page pool Toke Høiland-Jørgensen
2025-03-25 15:45 ` [PATCH net-next v2 1/3] page_pool: Move pp_magic check into helper functions Toke Høiland-Jørgensen
2025-03-25 15:45 ` [PATCH net-next v2 2/3] page_pool: Turn dma_sync and dma_sync_cpu fields into a bitmap Toke Høiland-Jørgensen
2025-03-25 22:17   ` Jakub Kicinski
2025-03-26  8:12     ` Toke Høiland-Jørgensen
2025-03-26 11:23       ` Jakub Kicinski
2025-03-26 11:29         ` Jakub Kicinski
2025-03-26 18:00   ` Saeed Mahameed
2025-03-25 15:45 ` [PATCH net-next v2 3/3] page_pool: Track DMA-mapped pages and unmap them when destroying the pool Toke Høiland-Jørgensen
2025-03-26 13:54   ` Jesper Dangaard Brouer [this message]
2025-03-26 18:22   ` Saeed Mahameed
2025-03-26 20:02     ` Mina Almasry
2025-03-27  0:29       ` Saeed Mahameed
2025-03-27  1:37         ` Mina Almasry
2025-03-27  3:53     ` Yunsheng Lin
2025-03-27  4:59       ` Mina Almasry
2025-03-27  7:21         ` Yunsheng Lin
