From: Sean Christopherson <seanjc@google.com>
To: Peter Xu <peterx@redhat.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Oscar Salvador <osalvador@suse.de>,
Jason Gunthorpe <jgg@nvidia.com>,
Axel Rasmussen <axelrasmussen@google.com>,
linux-arm-kernel@lists.infradead.org, x86@kernel.org,
Will Deacon <will@kernel.org>, Gavin Shan <gshan@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>, Zi Yan <ziy@nvidia.com>,
Andrew Morton <akpm@linux-foundation.org>,
Catalin Marinas <catalin.marinas@arm.com>,
Ingo Molnar <mingo@redhat.com>,
Alistair Popple <apopple@nvidia.com>,
Borislav Petkov <bp@alien8.de>,
David Hildenbrand <david@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
kvm@vger.kernel.org, Dave Hansen <dave.hansen@linux.intel.com>,
Alex Williamson <alex.williamson@redhat.com>,
Yan Zhao <yan.y.zhao@intel.com>
Subject: Re: [PATCH 09/19] mm: New follow_pfnmap API
Date: Fri, 16 Aug 2024 16:12:24 -0700
Message-ID: <Zr_c2C06eusc_b1l@google.com>
In-Reply-To: <20240809160909.1023470-10-peterx@redhat.com>
On Fri, Aug 09, 2024, Peter Xu wrote:
> Introduce a pair of APIs to follow pfn mappings to get entry information.
> It's very similar to what follow_pte() did before, but differs in that
> it recognizes huge pfn mappings.
...
> +int follow_pfnmap_start(struct follow_pfnmap_args *args);
> +void follow_pfnmap_end(struct follow_pfnmap_args *args);
I find the start()+end() terminology unintuitive.  E.g. I had to look at the
implementation to understand why KVM invokes fixup_user_fault() when
follow_pfnmap_start() fails.
What about follow_pfnmap_and_lock()? And then maybe follow_pfnmap_unlock()?
Though that second one reads a little weird.
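
For illustration only (these names are just my suggestion, nothing the series
actually defines), the declarations would then read:

	/* Hypothetical names, matching the suggestion above. */
	int follow_pfnmap_and_lock(struct follow_pfnmap_args *args);
	void follow_pfnmap_unlock(struct follow_pfnmap_args *args);

The "_and_lock" suffix at least makes it obvious that a successful call leaves
a lock held that the second helper releases.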
> + * Return: zero on success, -ve otherwise.
"-ve"?  Please spell that out, e.g. "negative error code on failure".
> +int follow_pfnmap_start(struct follow_pfnmap_args *args)
> +{
> + struct vm_area_struct *vma = args->vma;
> + unsigned long address = args->address;
> + struct mm_struct *mm = vma->vm_mm;
> + spinlock_t *lock;
> + pgd_t *pgdp;
> + p4d_t *p4dp, p4d;
> + pud_t *pudp, pud;
> + pmd_t *pmdp, pmd;
> + pte_t *ptep, pte;
> +
> + pfnmap_lockdep_assert(vma);
> +
> + if (unlikely(address < vma->vm_start || address >= vma->vm_end))
> + goto out;
> +
> + if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
> + goto out;
Why use a goto instead of simply:
return -EINVAL;
That's relevant because I think the cases where no PxE is found should return
-ENOENT, not -EINVAL.  E.g. if the caller doesn't precheck the VMA, it can bail
immediately on -EINVAL, but knows it's worth trying to fault in the pfn on
-ENOENT, e.g. something like the sketch below.
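
Rough caller-side sketch (not KVM's actual code; the "write" flag and the
retry loop are made up for illustration, and I'm assuming the args struct
exposes pfn/writable outputs the way the rest of the series suggests):

	struct follow_pfnmap_args args = { .vma = vma, .address = addr };
	int r;

retry:
	r = follow_pfnmap_start(&args);
	if (r == -EINVAL) {
		/* Not a VM_IO/VM_PFNMAP VMA, or a bogus address: give up. */
		return r;
	} else if (r == -ENOENT) {
		/* Nothing mapped: fault the pfn in and try again. */
		bool unlocked = false;

		/* "write" is a stand-in for the caller's write intent. */
		r = fixup_user_fault(vma->vm_mm, addr,
				     write ? FAULT_FLAG_WRITE : 0, &unlocked);
		if (r)
			return r;
		goto retry;
	}

	/* ... consume args.pfn, args.writable, etc. ... */

	follow_pfnmap_end(&args);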
> +retry:
> + pgdp = pgd_offset(mm, address);
> + if (pgd_none(*pgdp) || unlikely(pgd_bad(*pgdp)))
> + goto out;
> +
> + p4dp = p4d_offset(pgdp, address);
> + p4d = READ_ONCE(*p4dp);
> + if (p4d_none(p4d) || unlikely(p4d_bad(p4d)))
> + goto out;
> +
> + pudp = pud_offset(p4dp, address);
> + pud = READ_ONCE(*pudp);
> + if (pud_none(pud))
> + goto out;
> + if (pud_leaf(pud)) {
> + lock = pud_lock(mm, pudp);
> + if (!unlikely(pud_leaf(pud))) {
> + spin_unlock(lock);
> + goto retry;
> + }
> + pfnmap_args_setup(args, lock, NULL, pud_pgprot(pud),
> + pud_pfn(pud), PUD_MASK, pud_write(pud),
> + pud_special(pud));
> + return 0;
> + }
> +
> + pmdp = pmd_offset(pudp, address);
> + pmd = pmdp_get_lockless(pmdp);
> + if (pmd_leaf(pmd)) {
> + lock = pmd_lock(mm, pmdp);
> + if (!unlikely(pmd_leaf(pmd))) {
> + spin_unlock(lock);
> + goto retry;
> + }
> + pfnmap_args_setup(args, lock, NULL, pmd_pgprot(pmd),
> + pmd_pfn(pmd), PMD_MASK, pmd_write(pmd),
> + pmd_special(pmd));
> + return 0;
> + }
> +
> + ptep = pte_offset_map_lock(mm, pmdp, address, &lock);
> + if (!ptep)
> + goto out;
> + pte = ptep_get(ptep);
> + if (!pte_present(pte))
> + goto unlock;
> + pfnmap_args_setup(args, lock, ptep, pte_pgprot(pte),
> + pte_pfn(pte), PAGE_MASK, pte_write(pte),
> + pte_special(pte));
> + return 0;
> +unlock:
> + pte_unmap_unlock(ptep, lock);
> +out:
> + return -EINVAL;
> +}
> +EXPORT_SYMBOL_GPL(follow_pfnmap_start);
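
To be concrete about the error split I'm suggesting (a sketch against the code
above, not a tested diff): the up-front sanity checks would return -EINVAL
directly, and the page-table-walk misses would return -ENOENT, e.g.:

	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
		return -EINVAL;

	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
		return -EINVAL;
retry:
	pgdp = pgd_offset(mm, address);
	if (pgd_none(*pgdp) || unlikely(pgd_bad(*pgdp)))
		return -ENOENT;

	...

unlock:
	pte_unmap_unlock(ptep, lock);
	return -ENOENT;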