* Re: [PATCH RFC] iommu/dma: Validate page before accessing P2PDMA state
From: Pranjal Shrivastava @ 2026-02-24 20:57 UTC
To: Leon Romanovsky
Cc: Ashish Mhetre, robin.murphy, joro, will, iommu, linux-kernel,
linux-tegra, linux-mm
On Tue, Feb 24, 2026 at 02:32:21PM +0200, Leon Romanovsky wrote:
> On Tue, Feb 24, 2026 at 10:42:57AM +0000, Ashish Mhetre wrote:
> > When mapping scatter-gather entries that reference reserved
> > memory regions without struct page backing (e.g., bootloader created
> > carveouts), is_pci_p2pdma_page() dereferences the page pointer
> > returned by sg_page() without first verifying its validity.
>
> I believe this behavior started after commit 88df6ab2f34b
> ("mm: add folio_is_pci_p2pdma()"). Prior to that change, the
> is_zone_device_page(page) check would return false when given a
> non-existent page pointer.
>
Doesn't folio_is_pci_p2pdma() also check for zone device?
I see[1] that it does:
static inline bool folio_is_pci_p2pdma(const struct folio *folio)
{
	return IS_ENABLED(CONFIG_PCI_P2PDMA) &&
	       folio_is_zone_device(folio) &&
	       folio->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
}
I believe the problem arises due to the page_folio() call in
folio_is_pci_p2pdma(page_folio(page)); within is_pci_p2pdma_page().
page_folio() assumes it has a valid struct page to work with. For these
carveouts, that isn't true.
Potentially something like the following would stop the crash:
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index e3c2ccf872a8..e47876021afa 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -197,7 +197,8 @@ static inline void folio_set_zone_device_data(struct folio *folio, void *data)
 static inline bool is_pci_p2pdma_page(const struct page *page)
 {
-	return IS_ENABLED(CONFIG_PCI_P2PDMA) &&
+	return IS_ENABLED(CONFIG_PCI_P2PDMA) && page &&
+	       pfn_valid(page_to_pfn(page)) &&
 	       folio_is_pci_p2pdma(page_folio(page));
 }
But my broader question is: why are we calling a page-based API like
is_pci_p2pdma_page() on non-struct-page memory in the first place?
Could we instead add a helper to verify if the sg_page() return value
is actually backed by a struct page? If it isn't, we should arguably
skip the P2PDMA logic entirely and fall back to a dma_map_phys style
path. Isn't handling these "pageless" physical ranges the primary reason
dma_map_phys exists?
+mm list
Thanks,
Praan
[1] https://elixir.bootlin.com/linux/v6.19.3/source/include/linux/memremap.h#L179
> If any fix is needed, is_pci_p2pdma_page() must be changed, not the iommu code.
>
> Thanks
>
> >
> > This causes a kernel paging fault when CONFIG_PCI_P2PDMA is enabled
> > and dma_map_sg_attrs() is called for memory regions that have no
> > associated struct page:
> >
> > Unable to handle kernel paging request at virtual address fffffc007d100000
> > ...
> > Call trace:
> > iommu_dma_map_sg+0x118/0x414
> > dma_map_sg_attrs+0x38/0x44
> >
> > Fix this by adding a pfn_valid() check before calling
> > is_pci_p2pdma_page(). If the page frame number is invalid, skip the
> > P2PDMA check entirely as such memory cannot be P2PDMA memory anyway.
> >
> > Signed-off-by: Ashish Mhetre <amhetre@nvidia.com>
> > ---
> > drivers/iommu/dma-iommu.c | 4 ++++
> > 1 file changed, 4 insertions(+)
> >
> > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > index 5dac64be61bb..5f45f33b23c2 100644
> > --- a/drivers/iommu/dma-iommu.c
> > +++ b/drivers/iommu/dma-iommu.c
> > @@ -1423,6 +1423,9 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
> >  		size_t s_length = s->length;
> >  		size_t pad_len = (mask - iova_len + 1) & mask;
> >  
> > +		if (!pfn_valid(page_to_pfn(sg_page(s))))
> > +			goto post_pci_p2pdma;
> > +
> >  		switch (pci_p2pdma_state(&p2pdma_state, dev, sg_page(s))) {
> >  		case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
> >  			/*
> > @@ -1449,6 +1452,7 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
> >  			goto out_restore_sg;
> >  		}
> >  
> > +post_pci_p2pdma:
> >  		sg_dma_address(s) = s_iova_off;
> >  		sg_dma_len(s) = s_length;
> >  		s->offset -= s_iova_off;
> > --
> > 2.25.1
> >
> >
>