From: Ankit Agrawal <ankita@nvidia.com>
To: Jason Gunthorpe <jgg@nvidia.com>,
Catalin Marinas <catalin.marinas@arm.com>
Cc: "maz@kernel.org" <maz@kernel.org>,
"oliver.upton@linux.dev" <oliver.upton@linux.dev>,
"joey.gouly@arm.com" <joey.gouly@arm.com>,
"suzuki.poulose@arm.com" <suzuki.poulose@arm.com>,
"yuzenghui@huawei.com" <yuzenghui@huawei.com>,
"will@kernel.org" <will@kernel.org>,
"ryan.roberts@arm.com" <ryan.roberts@arm.com>,
"shahuang@redhat.com" <shahuang@redhat.com>,
"lpieralisi@kernel.org" <lpieralisi@kernel.org>,
"david@redhat.com" <david@redhat.com>,
"ddutile@redhat.com" <ddutile@redhat.com>,
"seanjc@google.com" <seanjc@google.com>,
Aniket Agashe <aniketa@nvidia.com>, Neo Jia <cjia@nvidia.com>,
Kirti Wankhede <kwankhede@nvidia.com>,
Krishnakant Jaju <kjaju@nvidia.com>,
"Tarun Gupta (SW-GPU)" <targupta@nvidia.com>,
Vikram Sethi <vsethi@nvidia.com>,
Andy Currid <acurrid@nvidia.com>,
Alistair Popple <apopple@nvidia.com>,
John Hubbard <jhubbard@nvidia.com>,
Dan Williams <danw@nvidia.com>, Zhi Wang <zhiw@nvidia.com>,
Matt Ochs <mochs@nvidia.com>, Uday Dhoke <udhoke@nvidia.com>,
Dheeraj Nigam <dnigam@nvidia.com>,
"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
"sebastianene@google.com" <sebastianene@google.com>,
"coltonlewis@google.com" <coltonlewis@google.com>,
"kevin.tian@intel.com" <kevin.tian@intel.com>,
"yi.l.liu@intel.com" <yi.l.liu@intel.com>,
"ardb@kernel.org" <ardb@kernel.org>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"gshan@redhat.com" <gshan@redhat.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"tabba@google.com" <tabba@google.com>,
"qperret@google.com" <qperret@google.com>,
"kvmarm@lists.linux.dev" <kvmarm@lists.linux.dev>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>,
"maobibo@loongson.cn" <maobibo@loongson.cn>
Subject: Re: [PATCH v7 4/5] KVM: arm64: Allow cacheable stage 2 mapping using VMA flags
Date: Thu, 19 Jun 2025 12:14:38 +0000
Message-ID: <SA1PR12MB7199835E63E1EF48C7C7638DB07DA@SA1PR12MB7199.namprd12.prod.outlook.com>
In-Reply-To: <20250618163836.GA1629589@nvidia.com>
>> > - disable_cmo = true;
>> > + if (!is_vma_cacheable)
>> > + disable_cmo = true;
>>
>> I'm tempted to stick to the 'device' variable name. Or something like
>> s2_noncacheable. As I commented, it's not just about disabling CMOs.
>
> I think it would be clearer to have two concepts/variables then, because
> the cases where it is really about preventing cacheable access to
> prevent aborts are not linked to the logic that checks pfn_valid. We
> have to detect those cases separately (through the VMA flags, was it?).
>
> Having these two things together is IMHO confusing.
>
> Jason
Thanks Catalin and Jason for the comments.
Considering the feedback, I think we may do the following here:
1. Rename the device variable to s2_noncacheable to represent whether the S2
is going to be marked non-cacheable. Otherwise the S2 will be mapped Normal.
2. Detect which PFNs have to be marked s2_noncacheable. If a PFN is not in
the kernel map, mark it s2_noncacheable, except for PFNMAP backed by a
cacheable VMA.
3. Prohibit cacheable PFNMAP if the hardware doesn't support FWB and CACHE
DIC (see the sketch after this list).
4. Prohibit a non-cacheable S2 mapping for a cacheable VMA in all cases,
whether on pre-FWB hardware or not.
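For reference, item 3 relies on the helper introduced in patch 3/5. Below is
a minimal sketch of what that check could look like, assuming it keys off
stage-2 FWB and CTR_EL0.DIC; this is my shorthand for the idea, not the
actual patch:

/*
 * Sketch only: a cacheable PFNMAP can be honoured when KVM never has
 * to perform CMOs by virtual address, i.e. FWB makes D-cache
 * maintenance unnecessary for the S2, and CTR_EL0.DIC makes I-cache
 * invalidation unnecessary before execution.
 */
static inline bool kvm_arch_supports_cacheable_pfnmap(void)
{
	return cpus_have_final_cap(ARM64_HAS_STAGE2_FWB) &&
	       cpus_have_final_cap(ARM64_HAS_CACHE_DIC);
}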
This is roughly how the patch would look:
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 339194441a25..979668d475bd 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1516,8 +1516,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
{
int ret = 0;
bool write_fault, writable, force_pte = false;
- bool exec_fault, mte_allowed, is_vma_cacheable;
- bool device = false, vfio_allow_any_uc = false;
+ bool exec_fault, mte_allowed, is_vma_cacheable, cacheable_pfnmap = false;
+ bool s2_noncacheable = false, vfio_allow_any_uc = false;
unsigned long mmu_seq;
phys_addr_t ipa = fault_ipa;
struct kvm *kvm = vcpu->kvm;
@@ -1660,6 +1660,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
is_vma_cacheable = kvm_vma_is_cacheable(vma);
+ if (vma->vm_flags & VM_PFNMAP) {
+ /* Reject COW VM_PFNMAP */
+ if (is_cow_mapping(vma->vm_flags))
+ return -EINVAL;
+
+ if (is_vma_cacheable)
+ cacheable_pfnmap = true;
+ }
+
/* Don't use the VMA after the unlock -- it may have vanished */
vma = NULL;
@@ -1684,8 +1693,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
return -EFAULT;
if (kvm_is_device_pfn(pfn)) {
- if (is_vma_cacheable)
- return -EINVAL;
+ /*
+ * When FWB is unsupported, KVM needs to perform cache maintenance
+ * (via dcache_clean_inval_poc()) on the underlying memory. This is
+ * only possible if the memory is already present in the kernel map.
+ *
+ * Outright reject the fault, as the cacheable device memory is not
+ * present in the kernel map and is not suitable for cache maintenance.
+ */
+ if (cacheable_pfnmap && !kvm_arch_supports_cacheable_pfnmap())
+ return -EFAULT;
/*
* If the page was identified as device early by looking at
@@ -1696,8 +1713,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
*
* In both cases, we don't let transparent_hugepage_adjust()
* change things at the last minute.
+ *
+ * Allow the S2 to be mapped cacheable for PFNMAP device memory
+ * marked as cacheable in the VMA. Note that such a mapping is
+ * safe, as the KVM S2 will have the same Normal memory type as
+ * the VMA has in the S1.
*/
- device = true;
+ if (!cacheable_pfnmap)
+ s2_noncacheable = true;
} else if (logging_active && !write_fault) {
/*
* Only actually map the page as writable if this was a write
@@ -1706,7 +1729,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
writable = false;
}
- if (exec_fault && device)
+ /*
+ * Prohibit a region from being mapped non-cacheable in the S2
+ * while marked as cacheable in the userspace VMA. Such a
+ * mismatched mapping is a security risk.
+ */
+ if (is_vma_cacheable && s2_noncacheable)
+ return -EINVAL;
+
+ if (exec_fault && s2_noncacheable)
return -ENOEXEC;
/*
@@ -1739,7 +1770,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
* If we are not forced to use page mapping, check if we are
* backed by a THP and thus use block mapping if possible.
*/
- if (vma_pagesize == PAGE_SIZE && !(force_pte || device)) {
+ if (vma_pagesize == PAGE_SIZE && !(force_pte || s2_noncacheable)) {
if (fault_is_perm && fault_granule > PAGE_SIZE)
vma_pagesize = fault_granule;
else
@@ -1753,7 +1784,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
}
}
- if (!fault_is_perm && !device && kvm_has_mte(kvm)) {
+ if (!fault_is_perm && !s2_noncacheable && kvm_has_mte(kvm)) {
/* Check the VMM hasn't introduced a new disallowed VMA */
if (mte_allowed) {
sanitise_mte_tags(kvm, pfn, vma_pagesize);
@@ -1769,7 +1800,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (exec_fault)
prot |= KVM_PGTABLE_PROT_X;
- if (device) {
+ if (s2_noncacheable) {
if (vfio_allow_any_uc)
prot |= KVM_PGTABLE_PROT_NORMAL_NC;
else
@@ -2266,8 +2297,12 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
break;
}
- /* Cacheable PFNMAP is not allowed */
- if (kvm_vma_is_cacheable(vma)) {
+ /*
+ * Cacheable PFNMAP is allowed only if the hardware
+ * supports it.
+ */
+ if (kvm_vma_is_cacheable(vma) &&
+ !kvm_arch_supports_cacheable_pfnmap()) {
ret = -EINVAL;
break;
}
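For completeness, this is the VMM-side flow the series enables, as a rough
sketch. It assumes the device driver installs the BAR as Normal cacheable in
the VMA via its mmap handler; the fd, offset, slot and GPA values are
illustrative, not taken from this series:

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

/* Illustrative only: map a VFIO device region and hand it to KVM. */
static int map_cacheable_bar(int vm_fd, int vfio_device_fd,
			     uint64_t region_offset, uint64_t bar_size,
			     uint32_t slot, uint64_t gpa)
{
	struct kvm_userspace_memory_region region;
	void *bar;

	/*
	 * The driver's mmap handler decides the VMA memory type; with
	 * this series, a cacheable VMA may now yield a cacheable S2.
	 */
	bar = mmap(NULL, bar_size, PROT_READ | PROT_WRITE, MAP_SHARED,
		   vfio_device_fd, region_offset);
	if (bar == MAP_FAILED)
		return -1;

	region = (struct kvm_userspace_memory_region) {
		.slot            = slot,
		.guest_phys_addr = gpa,
		.memory_size     = bar_size,
		.userspace_addr  = (uint64_t)(uintptr_t)bar,
	};
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}

On pre-FWB or non-DIC hardware, the kvm_arch_prepare_memory_region() hunk
above would reject such a slot with -EINVAL at registration time rather
than at fault time.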