From: Alistair Popple <apopple@nvidia.com>
To: Bjorn Helgaas <helgaas@kernel.org>
Cc: Hou Tao <houtao@huaweicloud.com>,
linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org,
linux-mm@kvack.org, linux-nvme@lists.infradead.org,
Bjorn Helgaas <bhelgaas@google.com>,
Logan Gunthorpe <logang@deltatee.com>,
Leon Romanovsky <leonro@nvidia.com>,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
Tejun Heo <tj@kernel.org>,
"Rafael J . Wysocki" <rafael@kernel.org>,
Danilo Krummrich <dakr@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@kernel.dk>,
Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
houtao1@huawei.com
Subject: Re: [PATCH 01/13] PCI/P2PDMA: Release the per-cpu ref of pgmap when vm_insert_page() fails
Date: Mon, 12 Jan 2026 11:12:27 +1100
Message-ID: <j7uozwnjzvcbhhwlhb6qgwlrve26dfhvwdo3w4uk7lcp57do37@v4dczubjeoy5>
In-Reply-To: <troogtjpbpaxbx34qu56a3qeu7eat36lzp4pvq4asoejxv3zbp@za7hekzxsgpo>
On 2026-01-12 at 10:21 +1100, Alistair Popple <apopple@nvidia.com> wrote...
> On 2026-01-10 at 02:03 +1100, Bjorn Helgaas <helgaas@kernel.org> wrote...
> > On Fri, Jan 09, 2026 at 11:41:51AM +1100, Alistair Popple wrote:
> > > On 2026-01-09 at 02:55 +1100, Bjorn Helgaas <helgaas@kernel.org> wrote...
> > > > On Thu, Jan 08, 2026 at 02:23:16PM +1100, Alistair Popple wrote:
> > > > > On 2025-12-20 at 15:04 +1100, Hou Tao <houtao@huaweicloud.com> wrote...
> > > > > > From: Hou Tao <houtao1@huawei.com>
> > > > > >
> > > > > > When vm_insert_page() fails in p2pmem_alloc_mmap(), p2pmem_alloc_mmap()
> > > > > > doesn't invoke percpu_ref_put() to free the per-cpu ref of pgmap
> > > > > > acquired after gen_pool_alloc_owner(), and memunmap_pages() will hang
> > > > > > forever when trying to remove the PCIe device.
> > > > > >
> > > > > > Fix it by adding the missed percpu_ref_put().
> > > ...
> >
> > > > Looking at this again, I'm confused about why in the normal, non-error
> > > > case, we do the percpu_ref_tryget_live_rcu(ref), followed by another
> > > > percpu_ref_get(ref) for each page, followed by just a single
> > > > percpu_ref_put() at the exit.
> > > >
> > > > So we do ref_get() "1 + number of pages" times but we only do a single
> > > > ref_put(). Is there a loop of ref_put() for each page elsewhere?
> > >
> > > Right, the per-page ref_put() happens when the page is freed (i.e. the struct
> > > page refcount drops to zero) - in this case free_zone_device_folio() will call
> > > p2pdma_folio_free() which has the corresponding percpu_ref_put().
> >
> > I don't see anything that looks like a loop to call ref_put() for each
> > page in free_zone_device_folio() or in p2pdma_folio_free(), but this
> > is all completely out of my range, so I'll take your word for it :)
>
> That's brave :-)
>
> What happens is the core mm takes over managing the page lifetime once
> vm_insert_page() has been (successfully) called to map the page:
>
> VM_WARN_ON_ONCE_PAGE(!page_ref_count(page), page);
> set_page_count(page, 1);
> ret = vm_insert_page(vma, vaddr, page);
> if (ret) {
> gen_pool_free(p2pdma->pool, (uintptr_t)kaddr, len);
> return ret;
> }
> percpu_ref_get(ref);
> put_page(page);
>
> In the above sequence vm_insert_page() takes a page ref for each page it maps
> into the user page tables with folio_get(). This reference is dropped when the
> user page table entry is removed, typically by the loop in zap_pte_range().
>
> Normally the user page table mapping is the only thing holding a reference so
> it ends up calling folio_put()->free_zone_device_folio()->...->ref_put() one page
> at a time as the PTEs are removed from the page tables. At least that's what
> happens conceptually - the TLB batching code makes it hard to actually see where
> the folio_put() is called in this sequence.
>
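Spelling that chain out a bit more (ignoring the mmu_gather/TLB batching, so the
exact call sites are approximate):

	zap_pte_range()                   /* PTE removed, mapping ref dropped */
	  -> folio_put()
	    -> free_zone_device_folio()   /* folio refcount has hit zero */
	      -> p2pdma_folio_free()      /* via the pgmap ops */
	        -> percpu_ref_put(ref)    /* pairs with the percpu_ref_get() above */
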
> Note the extra set_page_count(1) and put_page(page) in the above sequence are
> just there to make vm_insert_page() happy - it complains if you try to insert a
> page with a zero page refcount.
>
> And looking at that sequence there is another minor bug - in the failure
> path we are exiting the loop with the failed page's refcount still set to
> 1 from set_page_count(page, 1). That needs to be reset to zero with
> set_page_count(page, 0) to avoid the VM_WARN_ON_ONCE_PAGE() if the page gets
> reused. I will send a fix for that.
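
Roughly (untested, just to illustrate, reusing the names from the snippet above),
and together with the percpu_ref_put() this patch adds, I'd expect that failure
branch to end up as something like:

	ret = vm_insert_page(vma, vaddr, page);
	if (ret) {
		/*
		 * Undo the set_page_count(page, 1) above so the page
		 * doesn't trip the VM_WARN_ON_ONCE_PAGE() if it gets
		 * handed out again.
		 */
		set_page_count(page, 0);
		/* the put this patch adds for the pgmap ref */
		percpu_ref_put(ref);
		gen_pool_free(p2pdma->pool, (uintptr_t)kaddr, len);
		return ret;
	}
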
Actually the whole failure path above seems wrong to me - we free the
entire allocation with gen_pool_free() even though vm_insert_page() may
already have succeeded in mapping some of the pages. AFAICT the generic
VFS mmap code will call unmap_region() to undo any partial mapping (see
__mmap_new_file_vma()), but that should end up calling
folio_put()->free_zone_device_folio()->p2pdma_folio_free()->gen_pool_free_owner()
for the mapped pages even though we've already returned the entire
allocation to the pool.
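
For the mapped pages that path ends in p2pdma_folio_free(), which (going from
memory, so the exact details may differ) boils down to:

	struct percpu_ref *ref;

	gen_pool_free_owner(p2pdma->pool, (uintptr_t)folio_address(folio),
			    folio_size(folio), (void **)&ref);
	percpu_ref_put(ref);

so it looks like we would hand the same ranges back to the pool a second time.
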
> - Alistair
>
> > Bjorn
>

Thread overview: 42+ messages
2025-12-20 4:04 [PATCH 00/13] Enable compound page for p2pdma memory Hou Tao
2025-12-20 4:04 ` [PATCH 01/13] PCI/P2PDMA: Release the per-cpu ref of pgmap when vm_insert_page() fails Hou Tao
2025-12-22 16:49 ` Logan Gunthorpe
2026-01-08 3:23 ` Alistair Popple
2026-01-08 15:55 ` Bjorn Helgaas
2026-01-09 0:41 ` Alistair Popple
2026-01-09 15:03 ` Bjorn Helgaas
2026-01-11 23:21 ` Alistair Popple
2026-01-12 0:12 ` Alistair Popple [this message]
2026-01-12 0:23 ` Alistair Popple
2025-12-20 4:04 ` [PATCH 02/13] PCI/P2PDMA: Fix the warning condition in p2pmem_alloc_mmap() Hou Tao
2025-12-22 16:50 ` Logan Gunthorpe
2026-01-07 14:39 ` Christoph Hellwig
2026-01-07 17:17 ` Bjorn Helgaas
2026-01-07 20:34 ` Bjorn Helgaas
2026-01-08 10:17 ` Christoph Hellwig
2026-01-08 3:28 ` Alistair Popple
2025-12-20 4:04 ` [PATCH 03/13] kernfs: add support for get_unmapped_area callback Hou Tao
2025-12-20 15:43 ` kernel test robot
2025-12-20 15:57 ` kernel test robot
2025-12-20 4:04 ` [PATCH 04/13] kernfs: add support for may_split and pagesize callbacks Hou Tao
2025-12-20 4:04 ` [PATCH 05/13] sysfs: support get_unmapped_area callback for binary file Hou Tao
2025-12-20 4:04 ` [PATCH 06/13] PCI/P2PDMA: add align parameter for pci_p2pdma_add_resource() Hou Tao
2025-12-20 4:04 ` [PATCH 07/13] PCI/P2PDMA: create compound page for aligned p2pdma memory Hou Tao
2026-01-08 5:14 ` Alistair Popple
2025-12-20 4:04 ` [PATCH 08/13] mm/huge_memory: add helpers to insert huge page during mmap Hou Tao
2025-12-20 4:04 ` [PATCH 09/13] PCI/P2PDMA: support get_unmapped_area to return aligned vaddr Hou Tao
2025-12-20 4:04 ` [PATCH 10/13] PCI/P2PDMA: support compound page in p2pmem_alloc_mmap() Hou Tao
2025-12-22 17:04 ` Logan Gunthorpe
2025-12-24 2:20 ` Hou Tao
2026-01-05 17:24 ` Logan Gunthorpe
2026-01-07 20:24 ` Jason Gunthorpe
2026-01-07 21:22 ` Logan Gunthorpe
2026-01-08 5:20 ` Alistair Popple
2025-12-20 4:04 ` [PATCH 11/13] PCI/P2PDMA: add helper pci_p2pdma_max_pagemap_align() Hou Tao
2025-12-20 4:04 ` [PATCH 12/13] nvme-pci: introduce cmb_devmap_align module parameter Hou Tao
2025-12-20 22:22 ` kernel test robot
2025-12-20 4:04 ` [PATCH 13/13] PCI/P2PDMA: enable compound page support for p2pdma memory Hou Tao
2025-12-22 17:10 ` Logan Gunthorpe
2025-12-21 12:19 ` [PATCH 00/13] Enable compound page " Leon Romanovsky
[not found] ` <416b2575-f5e7-7faf-9e7c-6e9df170bf1a@huaweicloud.com>
2025-12-24 1:37 ` Hou Tao
2025-12-24 9:22 ` Leon Romanovsky