From: Alistair Popple <apopple@nvidia.com>
To: helgaas@kernel.org
Cc: houtao@huaweicloud.com, linux-kernel@vger.kernel.org,
linux-pci@vger.kernel.org, linux-mm@kvack.org,
linux-nvme@lists.infradead.org, bhelgaas@google.com,
logang@deltatee.com, leonro@nvidia.com,
gregkh@linuxfoundation.org, tj@kernel.org, rafael@kernel.org,
dakr@kernel.org, akpm@linux-foundation.org, david@kernel.org,
lorenzo.stoakes@oracle.com, kbusch@kernel.org, axboe@kernel.dk,
hch@lst.de, sagi@grimberg.me, houtao1@huawei.com,
Alistair Popple <apopple@nvidia.com>
Subject: [PATCH] PCI/P2PDMA: Reset page reference count when page mapping fails
Date: Mon, 12 Jan 2026 11:54:40 +1100
Message-ID: <20260112005440.998543-1-apopple@nvidia.com>
When mapping a p2pdma page the page reference count is initialised to
1 prior to calling vm_insert_page(), to avoid the vm_insert_page()
warning that fires when a page's refcount is zero. Before setting the
page count there is a check to ensure the page is currently free
(i.e. has a zero reference count).
However vm_insert_page() can fail. In that case the pages are freed
back to the genalloc pool, but that does not reset the page refcount.
A future allocation of the same page then sees the elevated refcount
left by the previous set_page_count() call, triggering the
VM_WARN_ON_ONCE_PAGE check that verifies the page is free.
Fix this by resetting the page refcount back to zero with
set_page_count(). Note that put_page() is not used because it would
implicitly call p2pdma_folio_free() and therefore free the page
twice.
Fixes: b7e282378773 ("mm/mm_init: move p2pdma page refcount initialisation to p2pdma")
Signed-off-by: Alistair Popple <apopple@nvidia.com>
---
This was found by inspection. I don't currently have a good setup that
exercises the p2pmem_alloc_mmap() path, so this has only been
compile-tested; additional testing would be appreciated.
---
drivers/pci/p2pdma.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index dd64ec830fdd..3b29246b9e86 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -152,6 +152,12 @@ static int p2pmem_alloc_mmap(struct file *filp, struct kobject *kobj,
ret = vm_insert_page(vma, vaddr, page);
if (ret) {
gen_pool_free(p2pdma->pool, (uintptr_t)kaddr, len);
+
+ /*
+ * Reset the page count. We don't use put_page() because
+ * we don't want to trigger the p2pdma_folio_free() path.
+ */
+ set_page_count(page, 0);
percpu_ref_put(ref);
return ret;
}
--
2.51.0