From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Paolo Abeni <pabeni@redhat.com>
Cc: Yunsheng Lin <linyunsheng@huawei.com>,
davem@davemloft.net, kuba@kernel.org, liuyonglong@huawei.com,
fanghaiqing@huawei.com, zhangkun09@huawei.com,
Robin Murphy <robin.murphy@arm.com>,
Alexander Duyck <alexander.duyck@gmail.com>,
IOMMU <iommu@lists.linux.dev>, Wei Fang <wei.fang@nxp.com>,
Shenwei Wang <shenwei.wang@nxp.com>,
Clark Wang <xiaoning.wang@nxp.com>,
Eric Dumazet <edumazet@google.com>,
Tony Nguyen <anthony.l.nguyen@intel.com>,
Przemek Kitszel <przemyslaw.kitszel@intel.com>,
Alexander Lobakin <aleksander.lobakin@intel.com>,
Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Jesper Dangaard Brouer <hawk@kernel.org>,
John Fastabend <john.fastabend@gmail.com>,
Saeed Mahameed <saeedm@nvidia.com>,
Leon Romanovsky <leon@kernel.org>,
Tariq Toukan <tariqt@nvidia.com>, Felix Fietkau <nbd@nbd.name>,
Lorenzo Bianconi <lorenzo@kernel.org>,
Ryder Lee <ryder.lee@mediatek.com>,
Shayne Chen <shayne.chen@mediatek.com>,
Sean Wang <sean.wang@mediatek.com>,
Kalle Valo <kvalo@kernel.org>,
Matthias Brugger <matthias.bgg@gmail.com>,
AngeloGioacchino Del Regno
<angelogioacchino.delregno@collabora.com>,
Andrew Morton <akpm@linux-foundation.org>,
imx@lists.linux.dev, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
bpf@vger.kernel.org, linux-rdma@vger.kernel.org,
linux-wireless@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-mediatek@lists.infradead.org, linux-mm@kvack.org
Subject: Re: [PATCH net v2 2/2] page_pool: fix IOMMU crash when driver has already unbound
Date: Wed, 2 Oct 2024 09:51:49 +0300
Message-ID: <CAC_iWjJXWgt9TdBSYGkc=htyeS=VAago5wqXzBgX_Mun76Z42g@mail.gmail.com>
In-Reply-To: <CAC_iWjKHofqDrp+jOO_QTp_8Op=KeE_jjhjsDUxjRa4vnHYJmQ@mail.gmail.com>
On Wed, 2 Oct 2024 at 09:46, Ilias Apalodimas
<ilias.apalodimas@linaro.org> wrote:
>
> Hi Paolo,
>
> Thanks for taking the time.
>
> On Tue, 1 Oct 2024 at 16:32, Paolo Abeni <pabeni@redhat.com> wrote:
> >
> > On 9/25/24 09:57, Yunsheng Lin wrote:
> > > A networking driver with page_pool support may hand a page
> > > that still has a DMA mapping over to the network stack, and
> > > reuse that page after the network stack is done with it and
> > > passes it back to the page_pool, to avoid the penalty of DMA
> > > mapping/unmapping. With all the caching in the network stack,
> > > some pages may be held in the network stack without returning
> > > to the page_pool soon enough, and with a VF disable causing
> > > the driver to unbind, the page_pool does not stop the driver
> > > from doing its unbinding work. Instead, the page_pool uses a
> > > workqueue to periodically check whether some pages have come
> > > back from the network stack; if any have, it does the DMA
> > > unmapping related cleanup work.
> > >
> > > As mentioned in [1], attempting DMA unmaps after the driver
> > > has already unbound may leak resources or at worst corrupt
> > > memory. Fundamentally, the page pool code cannot allow DMA
> > > mappings to outlive the driver they belong to.
> > >
> > > Currently there seem to be at least two cases in which the
> > > page is not released fast enough, causing the DMA unmapping
> > > to be done after the driver has already unbound:
> > > 1. ipv4 packet defragmentation timeout: this can delay a page
> > > by up to 30 seconds.
> > > 2. skb_defer_free_flush(): this may delay a page indefinitely
> > > if nothing triggers net_rx_action().
> > >
> > > In order not to do the DMA unmapping after the driver has
> > > already unbound, and not to stall the unloading of the
> > > networking driver, add a pool->items array to record all the
> > > pages, including the ones handed over to the network stack,
> > > so that the page_pool can do the DMA unmapping for those
> > > pages when page_pool_destroy() is called. As pool->items
> > > needs to be large enough to avoid performance degradation,
> > > add an 'item_full' stat to record allocation failures caused
> > > by pool->items being exhausted.
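For anyone skimming the thread, here is a very rough sketch of the
idea described above. This is not the actual patch: the items,
item_count and dma fields below are made-up names, purely
illustrative:

static void page_pool_unmap_stale_items(struct page_pool *pool)
{
	int i;

	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
		return;

	for (i = 0; i < pool->item_count; i++) {
		/* items[] is hypothetical: one slot per page the
		 * pool has ever handed out and not yet unmapped.
		 */
		struct page *page = pool->items[i].page;

		if (!page)
			continue;

		/* Unmap while pool->p.dev is still alive, so no DMA
		 * mapping outlives the device it belongs to.
		 */
		dma_unmap_page_attrs(pool->p.dev, pool->items[i].dma,
				     PAGE_SIZE << pool->p.order,
				     pool->p.dma_dir,
				     DMA_ATTR_SKIP_CPU_SYNC);
		pool->items[i].page = NULL;
	}
}

Something like that would run from page_pool_destroy(), before the
device can go away.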
> >
> > This looks really invasive, with room for potentially large
> > performance regressions or worse. At the very least it does not
> > look suitable for net.
>
> Perhaps, and you are right that we need to measure performance
> before pulling it in, but...
>
> >
> > Is the problem only tied to VFs drivers? It's a pity all the page_pool
> > users will have to pay a bill for it...
>
> It's not. The problem happens when an SKB has been scheduled for
> recycling and has already been mapped via page_pool. If the driver
> disappears in the meantime,
Apologies, this wasn't correct. It's the device that has to
disappear, not the driver.
> page_pool will free all the packets it
> holds in its private rings (both slow and fast), but it is not in
> control of the SKBs anymore. So for any packets coming back for
> recycling *after* that point, the memory cannot be unmapped
> properly.
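To make the window concrete, this is roughly the deferred-release
logic page_pool relies on today (simplified from page_pool.c). It
can only re-arm a timer and wait for in-flight pages; it cannot
force them back:

static void page_pool_release_retry(struct work_struct *wq)
{
	struct delayed_work *dwq = to_delayed_work(wq);
	struct page_pool *pool = container_of(dwq, typeof(*pool),
					      release_dw);
	int inflight;

	inflight = page_pool_release(pool);
	if (!inflight)
		return;	/* everything came back, pool is freed */

	/* Pages are still held somewhere in the stack: keep waiting.
	 * If the device disappears before they return, the eventual
	 * DMA unmap is exactly the use-after-unbind discussed here.
	 */
	schedule_delayed_work(&pool->release_dw, DEFER_TIME);
}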
>
> As discussed, this can lead to memory corruption and resource
> leaks, or worse, the panics seen in the bug report. I am fine with
> this going into -next, but it really is a bugfix, although I am not
> 100% sure that the Fixes: tag in the current patch is correct.
>
> Thanks
> /Ilias
> >
> > /P
> >