linux-mm.kvack.org archive mirror
* [RFC PATCH V2 0/3] Add NUMA mempolicy support for KVM guest_memfd
@ 2024-09-19  9:44 Shivank Garg
  2024-09-19  9:44 ` [RFC PATCH V2 1/3] KVM: guest_memfd: Extend creation API to support NUMA mempolicy Shivank Garg
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Shivank Garg @ 2024-09-19  9:44 UTC (permalink / raw)
  To: pbonzini, corbet, akpm, willy
  Cc: acme, namhyung, mpe, isaku.yamahata, joel, kvm, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, shivankg, bharata, nikunj

The current implementation of KVM guest-memfd does not honor the NUMA
policy settings provided by the VMM. While mbind() can be used for NUMA
policy support in userspace applications, it is not functional for
guest-memfd because the memory is not mapped to userspace.

This patch series adds support for specifying a NUMA memory policy for
guests that use private guest-memfd as the memory backend. QEMU already
supports the KVM guest-memfd memory backend via RAMBlock, but NUMA support
was missing: guest memory was allocated from arbitrary host NUMA nodes
even when a policy and host-nodes were passed on the QEMU command line.
This series ensures that the VMM-provided NUMA policy is adhered to.

This feature is particularly useful for SEV-SNP guests, as they require
the guest_memfd memory backend for allocations. Workloads with high
memory locality are likely to benefit from this change.

Users can provide a policy mode such as default, bind, interleave, or
preferred along with a list of node IDs from the host machine.

To try this patch series, build the custom QEMU with NUMA-aware KVM
guest-memfd support:
QEMU tree- https://github.com/AMDESE/qemu/tree/NUMA_guest_memfd
For instance, to run a SEV-SNP guest bound to NUMA Node 0 of the host,
the corresponding QEMU command would be:

$ qemu-system-x86_64 \
   -enable-kvm \
  ...
   -machine memory-encryption=sev0,vmport=off \
   -object sev-snp-guest,id=sev0,cbitpos=51,reduced-phys-bits=1 \
   -numa node,nodeid=0,memdev=ram0,cpus=0-15 \
   -object memory-backend-memfd,id=ram0,policy=bind,host-nodes=0,size=1024M,share=true,prealloc=false


v2:
- Add fixes suggested by Matthew Wilcox

v1: https://lore.kernel.org/linux-mm/20240916165743.201087-1-shivankg@amd.com

Shivansh Dhiman (3):
  KVM: guest_memfd: Extend creation API to support NUMA mempolicy
  mm: Add mempolicy support to the filemap layer
  KVM: guest_memfd: Enforce NUMA mempolicy if available

 Documentation/virt/kvm/api.rst | 13 ++++++++-
 include/linux/mempolicy.h      |  4 +++
 include/linux/pagemap.h        | 40 ++++++++++++++++++++++++++
 include/uapi/linux/kvm.h       |  5 +++-
 mm/filemap.c                   | 30 ++++++++++++++++----
 mm/mempolicy.c                 | 52 ++++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/kvm.h |  5 +++-
 virt/kvm/guest_memfd.c         | 28 ++++++++++++++----
 virt/kvm/kvm_mm.h              |  3 ++
 9 files changed, 167 insertions(+), 13 deletions(-)

-- 
2.34.1



^ permalink raw reply	[flat|nested] 6+ messages in thread

* [RFC PATCH V2 1/3] KVM: guest_memfd: Extend creation API to support NUMA mempolicy
  2024-09-19  9:44 [RFC PATCH V2 0/3] Add NUMA mempolicy support for KVM guest_memfd Shivank Garg
@ 2024-09-19  9:44 ` Shivank Garg
  2024-09-23  8:01   ` Chao Gao
  2024-09-19  9:44 ` [RFC PATCH V2 2/3] mm: Add mempolicy support to the filemap layer Shivank Garg
  2024-09-19  9:44 ` [RFC PATCH V2 3/3] KVM: guest_memfd: Enforce NUMA mempolicy if available Shivank Garg
  2 siblings, 1 reply; 6+ messages in thread
From: Shivank Garg @ 2024-09-19  9:44 UTC (permalink / raw)
  To: pbonzini, corbet, akpm, willy
  Cc: acme, namhyung, mpe, isaku.yamahata, joel, kvm, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, shivankg, bharata, nikunj,
	Shivansh Dhiman

From: Shivansh Dhiman <shivansh.dhiman@amd.com>

Extend the guest-memfd creation API to introduce proper NUMA support,
allowing the VMM to set memory policies effectively. The memory policy
defines from which nodes memory is allocated.

The current implementation of KVM guest-memfd does not honor the NUMA
policy settings provided by the VMM. While mbind() can be used for NUMA
policy support in userspace applications, it is not functional for
guest-memfd because the memory is not mapped to userspace.

Currently, SEV-SNP guests use guest-memfd as a memory backend and would
benefit from NUMA support, which enables fine-grained control over memory
allocation, optimizing performance for specific workload requirements.

To apply a memory policy on a guest-memfd, extend the KVM_CREATE_GUEST_MEMFD
IOCTL with additional fields related to the mempolicy:
- mpol_mode represents the policy mode (default, bind, interleave, or
  preferred).
- host_nodes_addr denotes the userspace address of the nodemask, a bitmask
  of nodes containing up to maxnode bits.
- The first bit of flags must be set to use the mempolicy.

Store the mempolicy struct in i_private_data of the memfd inode's mapping,
which is currently unused in the context of guest-memfd.

Signed-off-by: Shivansh Dhiman <shivansh.dhiman@amd.com>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
 Documentation/virt/kvm/api.rst | 13 ++++++++-
 include/linux/mempolicy.h      |  4 +++
 include/uapi/linux/kvm.h       |  5 +++-
 mm/mempolicy.c                 | 52 ++++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/kvm.h |  5 +++-
 virt/kvm/guest_memfd.c         | 21 ++++++++++++--
 virt/kvm/kvm_mm.h              |  3 ++
 7 files changed, 97 insertions(+), 6 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index b3be87489108..dcb61282c773 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6346,7 +6346,10 @@ and cannot be resized  (guest_memfd files do however support PUNCH_HOLE).
   struct kvm_create_guest_memfd {
 	__u64 size;
 	__u64 flags;
-	__u64 reserved[6];
+	__u64 host_nodes_addr;
+	__u16 maxnode;
+	__u8 mpol_mode;
+	__u8 reserved[37];
   };
 
 Conceptually, the inode backing a guest_memfd file represents physical memory,
@@ -6367,6 +6370,14 @@ a single guest_memfd file, but the bound ranges must not overlap).
 
 See KVM_SET_USER_MEMORY_REGION2 for additional details.
 
+NUMA memory policy support for KVM guest_memfd allows the host to specify
+memory allocation behavior for guest NUMA nodes, similar to mbind(). If
+KVM_GUEST_MEMFD_NUMA_ENABLE flag is set, memory allocations from the guest
+will use the specified policy and host-nodes for physical memory.
+- mpol_mode refers to the policy mode: default, bind, interleave, or
+  preferred.
+- host_nodes_addr points to bitmask of nodes containing up to maxnode bits.
+
 4.143 KVM_PRE_FAULT_MEMORY
 ---------------------------
 
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 1add16f21612..468eeda2ec2f 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -299,4 +299,8 @@ static inline bool mpol_is_preferred_many(struct mempolicy *pol)
 }
 
 #endif /* CONFIG_NUMA */
+
+struct mempolicy *create_mpol_from_args(unsigned char mode,
+					const unsigned long __user *nmask,
+					unsigned short maxnode);
 #endif
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 637efc055145..fda6cbef0a1d 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1561,7 +1561,10 @@ struct kvm_memory_attributes {
 struct kvm_create_guest_memfd {
 	__u64 size;
 	__u64 flags;
-	__u64 reserved[6];
+	__u64 host_nodes_addr;
+	__u16 maxnode;
+	__u8 mpol_mode;
+	__u8 reserved[37];
 };
 
 #define KVM_PRE_FAULT_MEMORY	_IOWR(KVMIO, 0xd5, struct kvm_pre_fault_memory)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b858e22b259d..9e9450433fcc 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -3557,3 +3557,55 @@ static int __init mempolicy_sysfs_init(void)
 
 late_initcall(mempolicy_sysfs_init);
 #endif /* CONFIG_SYSFS */
+
+#ifdef CONFIG_KVM_PRIVATE_MEM
+/**
+ * create_mpol_from_args - create a mempolicy structure from args
+ * @mode:  NUMA memory policy mode
+ * @nmask:  bitmask of NUMA nodes
+ * @maxnode:  number of bits in the nodes bitmask
+ *
+ * Create a mempolicy from given nodemask and memory policy such as
+ * default, preferred, interleave or bind.
+ *
+ * Return: the memory policy on success, or an error encoded in a pointer.
+ */
+struct mempolicy *create_mpol_from_args(unsigned char mode,
+					const unsigned long __user *nmask,
+					unsigned short maxnode)
+{
+	struct mm_struct *mm = current->mm;
+	unsigned short mode_flags;
+	struct mempolicy *mpol;
+	nodemask_t nodes;
+	int lmode = mode;
+	int err = -ENOMEM;
+
+	err = sanitize_mpol_flags(&lmode, &mode_flags);
+	if (err)
+		return ERR_PTR(err);
+
+	err = get_nodes(&nodes, nmask, maxnode);
+	if (err)
+		return ERR_PTR(err);
+
+	mpol = mpol_new(mode, mode_flags, &nodes);
+	if (IS_ERR_OR_NULL(mpol))
+		return mpol;
+
+	NODEMASK_SCRATCH(scratch);
+	if (!scratch)
+		return ERR_PTR(-ENOMEM);
+
+	mmap_write_lock(mm);
+	err = mpol_set_nodemask(mpol, &nodes, scratch);
+	mmap_write_unlock(mm);
+	NODEMASK_SCRATCH_FREE(scratch);
+
+	if (err)
+		return ERR_PTR(err);
+
+	return mpol;
+}
+EXPORT_SYMBOL(create_mpol_from_args);
+#endif
diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index e5af8c692dc0..e3effcd1e358 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -1546,7 +1546,10 @@ struct kvm_memory_attributes {
 struct kvm_create_guest_memfd {
 	__u64 size;
 	__u64 flags;
-	__u64 reserved[6];
+	__u64 host_nodes_addr;
+	__u16 maxnode;
+	__u8 mpol_mode;
+	__u8 reserved[37];
 };
 
 #define KVM_PRE_FAULT_MEMORY	_IOWR(KVMIO, 0xd5, struct kvm_pre_fault_memory)
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index e930014b4bdc..8f1877be4976 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -4,6 +4,7 @@
 #include <linux/kvm_host.h>
 #include <linux/pagemap.h>
 #include <linux/anon_inodes.h>
+#include <linux/mempolicy.h>
 
 #include "kvm_mm.h"
 
@@ -445,7 +446,8 @@ static const struct inode_operations kvm_gmem_iops = {
 	.setattr	= kvm_gmem_setattr,
 };
 
-static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
+static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags,
+			     struct mempolicy *pol)
 {
 	const char *anon_name = "[kvm-gmem]";
 	struct kvm_gmem *gmem;
@@ -478,6 +480,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 	inode->i_private = (void *)(unsigned long)flags;
 	inode->i_op = &kvm_gmem_iops;
 	inode->i_mapping->a_ops = &kvm_gmem_aops;
+	inode->i_mapping->i_private_data = (void *)pol;
 	inode->i_mode |= S_IFREG;
 	inode->i_size = size;
 	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
@@ -505,7 +508,8 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 {
 	loff_t size = args->size;
 	u64 flags = args->flags;
-	u64 valid_flags = 0;
+	u64 valid_flags = GUEST_MEMFD_NUMA_ENABLE;
+	struct mempolicy *mpol = NULL;
 
 	if (flags & ~valid_flags)
 		return -EINVAL;
@@ -513,7 +517,18 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 	if (size <= 0 || !PAGE_ALIGNED(size))
 		return -EINVAL;
 
-	return __kvm_gmem_create(kvm, size, flags);
+	if (flags & GUEST_MEMFD_NUMA_ENABLE) {
+		unsigned char mode = args->mpol_mode;
+		unsigned short maxnode = args->maxnode;
+		const unsigned long __user *user_nmask =
+				(const unsigned long *)args->host_nodes_addr;
+
+		mpol = create_mpol_from_args(mode, user_nmask, maxnode);
+		if (IS_ERR_OR_NULL(mpol))
+			return PTR_ERR(mpol);
+	}
+
+	return __kvm_gmem_create(kvm, size, flags, mpol);
 }
 
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index 715f19669d01..3dd8495ae03d 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -36,6 +36,9 @@ static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
 #endif /* HAVE_KVM_PFNCACHE */
 
 #ifdef CONFIG_KVM_PRIVATE_MEM
+/* Flag to check NUMA policy while creating KVM guest-memfd. */
+#define GUEST_MEMFD_NUMA_ENABLE BIT_ULL(0)
+
 void kvm_gmem_init(struct module *module);
 int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
-- 
2.34.1




* [RFC PATCH V2 2/3] mm: Add mempolicy support to the filemap layer
  2024-09-19  9:44 [RFC PATCH V2 0/3] Add NUMA mempolicy support for KVM guest_memfd Shivank Garg
  2024-09-19  9:44 ` [RFC PATCH V2 1/3] KVM: guest_memfd: Extend creation API to support NUMA mempolicy Shivank Garg
@ 2024-09-19  9:44 ` Shivank Garg
  2024-09-19  9:44 ` [RFC PATCH V2 3/3] KVM: guest_memfd: Enforce NUMA mempolicy if available Shivank Garg
  2 siblings, 0 replies; 6+ messages in thread
From: Shivank Garg @ 2024-09-19  9:44 UTC (permalink / raw)
  To: pbonzini, corbet, akpm, willy
  Cc: acme, namhyung, mpe, isaku.yamahata, joel, kvm, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, shivankg, bharata, nikunj,
	Shivansh Dhiman

From: Shivansh Dhiman <shivansh.dhiman@amd.com>

Introduce mempolicy support to the filemap layer. Add the
filemap_grab_folio_mpol(), filemap_alloc_folio_mpol_noprof() and
__filemap_get_folio_mpol() APIs, which take a mempolicy struct as an
argument.

These APIs are required by VMs that use the KVM guest-memfd memory
backend for NUMA-mempolicy-aware allocations.

Signed-off-by: Shivansh Dhiman <shivansh.dhiman@amd.com>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
 include/linux/pagemap.h | 40 ++++++++++++++++++++++++++++++++++++++++
 mm/filemap.c            | 30 +++++++++++++++++++++++++-----
 2 files changed, 65 insertions(+), 5 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index d9c7edb6422b..b05b696f310b 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -564,15 +564,25 @@ static inline void *detach_page_private(struct page *page)
 
 #ifdef CONFIG_NUMA
 struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order);
+struct folio *filemap_alloc_folio_mpol_noprof(gfp_t gfp, unsigned int order,
+						struct mempolicy *mpol);
 #else
 static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
 {
 	return folio_alloc_noprof(gfp, order);
 }
+static inline struct folio *filemap_alloc_folio_mpol_noprof(gfp_t gfp,
+						unsigned int order,
+						struct mempolicy *mpol)
+{
+	return filemap_alloc_folio_noprof(gfp, order);
+}
 #endif
 
 #define filemap_alloc_folio(...)				\
 	alloc_hooks(filemap_alloc_folio_noprof(__VA_ARGS__))
+#define filemap_alloc_folio_mpol(...)				\
+	alloc_hooks(filemap_alloc_folio_mpol_noprof(__VA_ARGS__))
 
 static inline struct page *__page_cache_alloc(gfp_t gfp)
 {
@@ -652,6 +662,8 @@ static inline fgf_t fgf_set_order(size_t size)
 void *filemap_get_entry(struct address_space *mapping, pgoff_t index);
 struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 		fgf_t fgp_flags, gfp_t gfp);
+struct folio *__filemap_get_folio_mpol(struct address_space *mapping,
+		pgoff_t index, fgf_t fgp_flags, gfp_t gfp, struct mempolicy *mpol);
 struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
 		fgf_t fgp_flags, gfp_t gfp);
 
@@ -710,6 +722,34 @@ static inline struct folio *filemap_grab_folio(struct address_space *mapping,
 			mapping_gfp_mask(mapping));
 }
 
+/**
+ * filemap_grab_folio_mpol - grab a folio from the page cache
+ * @mapping: The address space to search
+ * @index: The page index
+ * @mpol: The mempolicy to apply
+ *
+ * Same as filemap_grab_folio(), except that it allocates the folio using
+ * given memory policy.
+ *
+ * Return: A found or created folio, or ERR_PTR(-ENOMEM) if no folio is
+ * found and one could not be created.
+ */
+#ifdef CONFIG_NUMA
+static inline struct folio *filemap_grab_folio_mpol(struct address_space *mapping,
+					pgoff_t index, struct mempolicy *mpol)
+{
+	return __filemap_get_folio_mpol(mapping, index,
+			FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
+			mapping_gfp_mask(mapping), mpol);
+}
+#else
+static inline struct folio *filemap_grab_folio_mpol(struct address_space *mapping,
+					pgoff_t index, struct mempolicy *mpol)
+{
+	return filemap_grab_folio(mapping, index);
+}
+#endif /* CONFIG_NUMA */
+
 /**
  * find_get_page - find and get a page reference
  * @mapping: the address_space to search
diff --git a/mm/filemap.c b/mm/filemap.c
index d62150418b91..a870a05296c8 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -990,8 +990,13 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio,
 EXPORT_SYMBOL_GPL(filemap_add_folio);
 
 #ifdef CONFIG_NUMA
-struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
+struct folio *filemap_alloc_folio_mpol_noprof(gfp_t gfp, unsigned int order,
+			struct mempolicy *mpol)
 {
+	if (mpol)
+		return folio_alloc_mpol_noprof(gfp, order, mpol,
+				NO_INTERLEAVE_INDEX, numa_node_id());
+
 	int n;
 	struct folio *folio;
 
@@ -1007,6 +1012,12 @@ struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
 	}
 	return folio_alloc_noprof(gfp, order);
 }
+EXPORT_SYMBOL(filemap_alloc_folio_mpol_noprof);
+
+struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
+{
+	return filemap_alloc_folio_mpol_noprof(gfp, order, NULL);
+}
 EXPORT_SYMBOL(filemap_alloc_folio_noprof);
 #endif
 
@@ -1861,11 +1872,12 @@ void *filemap_get_entry(struct address_space *mapping, pgoff_t index)
 }
 
 /**
- * __filemap_get_folio - Find and get a reference to a folio.
+ * __filemap_get_folio_mpol - Find and get a reference to a folio.
  * @mapping: The address_space to search.
  * @index: The page index.
  * @fgp_flags: %FGP flags modify how the folio is returned.
  * @gfp: Memory allocation flags to use if %FGP_CREAT is specified.
+ * @mpol: The mempolicy to apply.
  *
  * Looks up the page cache entry at @mapping & @index.
  *
@@ -1876,8 +1888,8 @@ void *filemap_get_entry(struct address_space *mapping, pgoff_t index)
  *
  * Return: The found folio or an ERR_PTR() otherwise.
  */
-struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
-		fgf_t fgp_flags, gfp_t gfp)
+struct folio *__filemap_get_folio_mpol(struct address_space *mapping, pgoff_t index,
+		fgf_t fgp_flags, gfp_t gfp, struct mempolicy *mpol)
 {
 	struct folio *folio;
 
@@ -1947,7 +1959,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 			err = -ENOMEM;
 			if (order > 0)
 				alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
-			folio = filemap_alloc_folio(alloc_gfp, order);
+			folio = filemap_alloc_folio_mpol(alloc_gfp, order, mpol);
 			if (!folio)
 				continue;
 
@@ -1978,6 +1990,14 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 		return ERR_PTR(-ENOENT);
 	return folio;
 }
+EXPORT_SYMBOL(__filemap_get_folio_mpol);
+
+struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
+		fgf_t fgp_flags, gfp_t gfp)
+{
+	return __filemap_get_folio_mpol(mapping, index,
+			fgp_flags, gfp, NULL);
+}
 EXPORT_SYMBOL(__filemap_get_folio);
 
 static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
-- 
2.34.1




* [RFC PATCH V2 3/3] KVM: guest_memfd: Enforce NUMA mempolicy if available
  2024-09-19  9:44 [RFC PATCH V2 0/3] Add NUMA mempolicy support for KVM guest_memfd Shivank Garg
  2024-09-19  9:44 ` [RFC PATCH V2 1/3] KVM: guest_memfd: Extend creation API to support NUMA mempolicy Shivank Garg
  2024-09-19  9:44 ` [RFC PATCH V2 2/3] mm: Add mempolicy support to the filemap layer Shivank Garg
@ 2024-09-19  9:44 ` Shivank Garg
  2 siblings, 0 replies; 6+ messages in thread
From: Shivank Garg @ 2024-09-19  9:44 UTC (permalink / raw)
  To: pbonzini, corbet, akpm, willy
  Cc: acme, namhyung, mpe, isaku.yamahata, joel, kvm, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, shivankg, bharata, nikunj,
	Shivansh Dhiman

From: Shivansh Dhiman <shivansh.dhiman@amd.com>

Enforce the memory policy on guest-memfd to provide proper NUMA support.
Previously, guest-memfd allocations came from the local NUMA node in the
absence of a process mempolicy, resulting in arbitrary memory placement.
Moreover, mbind() cannot be used since the memory is not mapped to
userspace.

To support NUMA policies, retrieve the mempolicy struct from
i_private_data of the memfd inode's mapping. Use filemap_grab_folio_mpol()
to ensure that allocations follow the specified memory policy.

Signed-off-by: Shivansh Dhiman <shivansh.dhiman@amd.com>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
 virt/kvm/guest_memfd.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 8f1877be4976..8553d7069ba8 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -130,12 +130,15 @@ static struct folio *__kvm_gmem_get_folio(struct inode *inode, pgoff_t index,
 					  bool allow_huge)
 {
 	struct folio *folio = NULL;
+	struct mempolicy *mpol;
 
 	if (gmem_2m_enabled && allow_huge)
 		folio = kvm_gmem_get_huge_folio(inode, index, PMD_ORDER);
 
-	if (!folio)
-		folio = filemap_grab_folio(inode->i_mapping, index);
+	if (!folio) {
+		mpol = (struct mempolicy *)(inode->i_mapping->i_private_data);
+		folio = filemap_grab_folio_mpol(inode->i_mapping, index, mpol);
+	}
 
 	pr_debug("%s: allocate folio with PFN %lx order %d\n",
 		 __func__, folio_pfn(folio), folio_order(folio));
-- 
2.34.1




* Re: [RFC PATCH V2 1/3] KVM: guest_memfd: Extend creation API to support NUMA mempolicy
  2024-09-19  9:44 ` [RFC PATCH V2 1/3] KVM: guest_memfd: Extend creation API to support NUMA mempolicy Shivank Garg
@ 2024-09-23  8:01   ` Chao Gao
  2024-09-24  9:42     ` Shivank Garg
  0 siblings, 1 reply; 6+ messages in thread
From: Chao Gao @ 2024-09-23  8:01 UTC (permalink / raw)
  To: Shivank Garg
  Cc: pbonzini, corbet, akpm, willy, acme, namhyung, mpe,
	isaku.yamahata, joel, kvm, linux-doc, linux-kernel, linux-mm,
	linux-fsdevel, bharata, nikunj, Shivansh Dhiman

On Thu, Sep 19, 2024 at 09:44:36AM +0000, Shivank Garg wrote:
>From: Shivansh Dhiman <shivansh.dhiman@amd.com>
>
>Extend the API of creating guest-memfd to introduce proper NUMA support,
>allowing VMM to set memory policies effectively. The memory policy defines
>from which node memory is allocated.
>
>The current implementation of KVM guest-memfd does not honor the settings
>provided by VMM. While mbind() can be used for NUMA policy support in
>userspace applications, it is not functional for guest-memfd as the memory
>is not mapped to userspace.
>
>Currently, SEV-SNP guest use guest-memfd as a memory backend and would
>benefit from NUMA support. It enables fine-grained control over memory
>allocation, optimizing performance for specific workload requirements.
>
>To apply memory policy on a guest-memfd, extend the KVM_CREATE_GUEST_MEMFD
>IOCTL with additional fields related to mempolicy.
>- mpol_mode represents the policy mode (default, bind, interleave, or
>  preferred).
>- host_nodes_addr denotes the userspace address of the nodemask, a bit
>  mask of nodes containing up to maxnode bits.
>- First bit of flags must be set to use mempolicy.

Do you need a way for userspace to enumerate the supported flags?

The direction was to implement an fbind() syscall [1]. I am not sure if that
has changed. What are the benefits of this proposal compared to the fbind()
syscall?

I believe one limitation of this proposal is that the policy must be set
during creation of the guest-memfd, i.e., the policy cannot be changed at
runtime. Is that a practical problem?

[1]: https://lore.kernel.org/kvm/ZOjpIL0SFH+E3Dj4@google.com/

>
>Store the mempolicy struct in i_private_data of the memfd's inode, which
>is currently unused in the context of guest-memfd.
>
>Signed-off-by: Shivansh Dhiman <shivansh.dhiman@amd.com>
>Signed-off-by: Shivank Garg <shivankg@amd.com>
>---
> Documentation/virt/kvm/api.rst | 13 ++++++++-
> include/linux/mempolicy.h      |  4 +++
> include/uapi/linux/kvm.h       |  5 +++-
> mm/mempolicy.c                 | 52 ++++++++++++++++++++++++++++++++++
> tools/include/uapi/linux/kvm.h |  5 +++-
> virt/kvm/guest_memfd.c         | 21 ++++++++++++--
> virt/kvm/kvm_mm.h              |  3 ++
> 7 files changed, 97 insertions(+), 6 deletions(-)
>
>diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
>index b3be87489108..dcb61282c773 100644
>--- a/Documentation/virt/kvm/api.rst
>+++ b/Documentation/virt/kvm/api.rst
>@@ -6346,7 +6346,10 @@ and cannot be resized  (guest_memfd files do however support PUNCH_HOLE).
>   struct kvm_create_guest_memfd {
> 	__u64 size;
> 	__u64 flags;
>-	__u64 reserved[6];
>+	__u64 host_nodes_addr;
>+	__u16 maxnode;
>+	__u8 mpol_mode;
>+	__u8 reserved[37];
>   };
> 
> Conceptually, the inode backing a guest_memfd file represents physical memory,
>@@ -6367,6 +6370,14 @@ a single guest_memfd file, but the bound ranges must not overlap).
> 
> See KVM_SET_USER_MEMORY_REGION2 for additional details.
> 
>+NUMA memory policy support for KVM guest_memfd allows the host to specify
>+memory allocation behavior for guest NUMA nodes, similar to mbind(). If
>+KVM_GUEST_MEMFD_NUMA_ENABLE flag is set, memory allocations from the guest
>+will use the specified policy and host-nodes for physical memory.
>+- mpol_mode refers to the policy mode: default, bind, interleave, or
>+  preferred.
>+- host_nodes_addr points to bitmask of nodes containing up to maxnode bits.
>+
> 4.143 KVM_PRE_FAULT_MEMORY
> ---------------------------
> 
>diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
>index 1add16f21612..468eeda2ec2f 100644
>--- a/include/linux/mempolicy.h
>+++ b/include/linux/mempolicy.h
>@@ -299,4 +299,8 @@ static inline bool mpol_is_preferred_many(struct mempolicy *pol)
> }
> 
> #endif /* CONFIG_NUMA */
>+
>+struct mempolicy *create_mpol_from_args(unsigned char mode,
>+					const unsigned long __user *nmask,
>+					unsigned short maxnode);
> #endif
>diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
>index 637efc055145..fda6cbef0a1d 100644
>--- a/include/uapi/linux/kvm.h
>+++ b/include/uapi/linux/kvm.h
>@@ -1561,7 +1561,10 @@ struct kvm_memory_attributes {
> struct kvm_create_guest_memfd {
> 	__u64 size;
> 	__u64 flags;
>-	__u64 reserved[6];
>+	__u64 host_nodes_addr;
>+	__u16 maxnode;
>+	__u8 mpol_mode;
>+	__u8 reserved[37];
> };
> 
> #define KVM_PRE_FAULT_MEMORY	_IOWR(KVMIO, 0xd5, struct kvm_pre_fault_memory)
>diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>index b858e22b259d..9e9450433fcc 100644
>--- a/mm/mempolicy.c
>+++ b/mm/mempolicy.c
>@@ -3557,3 +3557,55 @@ static int __init mempolicy_sysfs_init(void)
> 
> late_initcall(mempolicy_sysfs_init);
> #endif /* CONFIG_SYSFS */
>+
>+#ifdef CONFIG_KVM_PRIVATE_MEM
>+/**
>+ * create_mpol_from_args - create a mempolicy structure from args
>+ * @mode:  NUMA memory policy mode
>+ * @nmask:  bitmask of NUMA nodes
>+ * @maxnode:  number of bits in the nodes bitmask
>+ *
>+ * Create a mempolicy from given nodemask and memory policy such as
>+ * default, preferred, interleave or bind.
>+ *
>+ * Return: error encoded in a pointer or memory policy on success.
>+ */
>+struct mempolicy *create_mpol_from_args(unsigned char mode,
>+					const unsigned long __user *nmask,
>+					unsigned short maxnode)
>+{
>+	struct mm_struct *mm = current->mm;
>+	unsigned short mode_flags;
>+	struct mempolicy *mpol;
>+	nodemask_t nodes;
>+	int lmode = mode;
>+	int err = -ENOMEM;
>+
>+	err = sanitize_mpol_flags(&lmode, &mode_flags);
>+	if (err)
>+		return ERR_PTR(err);
>+
>+	err = get_nodes(&nodes, nmask, maxnode);
>+	if (err)
>+		return ERR_PTR(err);
>+
>+	mpol = mpol_new(mode, mode_flags, &nodes);
>+	if (IS_ERR_OR_NULL(mpol))
>+		return mpol;
>+
>+	NODEMASK_SCRATCH(scratch);
>+	if (!scratch)
>+		return ERR_PTR(-ENOMEM);
>+
>+	mmap_write_lock(mm);
>+	err = mpol_set_nodemask(mpol, &nodes, scratch);
>+	mmap_write_unlock(mm);
>+	NODEMASK_SCRATCH_FREE(scratch);
>+
>+	if (err)
>+		return ERR_PTR(err);
>+
>+	return mpol;
>+}
>+EXPORT_SYMBOL(create_mpol_from_args);
>+#endif
>diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
>index e5af8c692dc0..e3effcd1e358 100644
>--- a/tools/include/uapi/linux/kvm.h
>+++ b/tools/include/uapi/linux/kvm.h
>@@ -1546,7 +1546,10 @@ struct kvm_memory_attributes {
> struct kvm_create_guest_memfd {
> 	__u64 size;
> 	__u64 flags;
>-	__u64 reserved[6];
>+	__u64 host_nodes_addr;
>+	__u16 maxnode;
>+	__u8 mpol_mode;
>+	__u8 reserved[37];
> };
> 
> #define KVM_PRE_FAULT_MEMORY	_IOWR(KVMIO, 0xd5, struct kvm_pre_fault_memory)
>diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
>index e930014b4bdc..8f1877be4976 100644
>--- a/virt/kvm/guest_memfd.c
>+++ b/virt/kvm/guest_memfd.c
>@@ -4,6 +4,7 @@
> #include <linux/kvm_host.h>
> #include <linux/pagemap.h>
> #include <linux/anon_inodes.h>
>+#include <linux/mempolicy.h>
> 
> #include "kvm_mm.h"
> 
>@@ -445,7 +446,8 @@ static const struct inode_operations kvm_gmem_iops = {
> 	.setattr	= kvm_gmem_setattr,
> };
> 
>-static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
>+static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags,
>+			     struct mempolicy *pol)
> {
> 	const char *anon_name = "[kvm-gmem]";
> 	struct kvm_gmem *gmem;
>@@ -478,6 +480,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
> 	inode->i_private = (void *)(unsigned long)flags;
> 	inode->i_op = &kvm_gmem_iops;
> 	inode->i_mapping->a_ops = &kvm_gmem_aops;
>+	inode->i_mapping->i_private_data = (void *)pol;
> 	inode->i_mode |= S_IFREG;
> 	inode->i_size = size;
> 	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
>@@ -505,7 +508,8 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
> {
> 	loff_t size = args->size;
> 	u64 flags = args->flags;
>-	u64 valid_flags = 0;
>+	u64 valid_flags = GUEST_MEMFD_NUMA_ENABLE;
>+	struct mempolicy *mpol = NULL;
> 
> 	if (flags & ~valid_flags)
> 		return -EINVAL;
>@@ -513,7 +517,18 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
> 	if (size <= 0 || !PAGE_ALIGNED(size))
> 		return -EINVAL;
> 
>-	return __kvm_gmem_create(kvm, size, flags);
>+	if (flags & GUEST_MEMFD_NUMA_ENABLE) {
>+		unsigned char mode = args->mpol_mode;
>+		unsigned short maxnode = args->maxnode;
>+		const unsigned long __user *user_nmask =
>+				(const unsigned long *)args->host_nodes_addr;
>+
>+		mpol = create_mpol_from_args(mode, user_nmask, maxnode);
>+		if (IS_ERR_OR_NULL(mpol))
>+			return PTR_ERR(mpol);
>+	}
>+
>+	return __kvm_gmem_create(kvm, size, flags, mpol);
> }
> 
> int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
>diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
>index 715f19669d01..3dd8495ae03d 100644
>--- a/virt/kvm/kvm_mm.h
>+++ b/virt/kvm/kvm_mm.h
>@@ -36,6 +36,9 @@ static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
> #endif /* HAVE_KVM_PFNCACHE */
> 
> #ifdef CONFIG_KVM_PRIVATE_MEM
>+/* Flag to check NUMA policy while creating KVM guest-memfd. */
>+#define GUEST_MEMFD_NUMA_ENABLE BIT_ULL(0)
>+
> void kvm_gmem_init(struct module *module);
> int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
> int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
>-- 
>2.34.1
>
>



* Re: [RFC PATCH V2 1/3] KVM: guest_memfd: Extend creation API to support NUMA mempolicy
  2024-09-23  8:01   ` Chao Gao
@ 2024-09-24  9:42     ` Shivank Garg
  0 siblings, 0 replies; 6+ messages in thread
From: Shivank Garg @ 2024-09-24  9:42 UTC (permalink / raw)
  To: Chao Gao
  Cc: pbonzini, corbet, akpm, willy, acme, namhyung, mpe,
	isaku.yamahata, joel, kvm, linux-doc, linux-kernel, linux-mm,
	linux-fsdevel, bharata, nikunj, Sean Christopherson



On 9/23/2024 1:31 PM, Chao Gao wrote:
> On Thu, Sep 19, 2024 at 09:44:36AM +0000, Shivank Garg wrote:

> 
> Do you need a way for the userspace to enumerate supported flags?
> 

If the flag is not supported, the VMM will get -EINVAL during guest-memfd
creation.

> The direction was to implement a fbind() syscall [1]. I am not sure if it has
> changed. What are the benefits of this proposal compared to the fbind() syscall?
> 
> I believe one limitation of this proposal is that the policy must be set during
> the creation of the guest-memfd. i.e., the policy cannot be changed at runtime.
> is it a practical problem?
> 
> [1]: https://lore.kernel.org/kvm/ZOjpIL0SFH+E3Dj4@google.com/
> 
As the folio allocation happens via guest_memfd, this was an interesting idea
for us to implement, and the mempolicy can be contained within guest_memfd.

For changing the policy at runtime, a KVM_GUEST_MEMFD_BIND IOCTL could be
proposed. However, I can't find any support in KVM/QEMU for changing the
memory binding at runtime.

The fbind() approach may pose a challenge for storing the mempolicy. Using the
"private" data fields of struct file or struct inode can conflict with other
users of those structures, and the subsystem would need some way (a new flag,
perhaps) to be informed about the private data for fbind()'s purposes.

Thanks,
Shivank 



end of thread, other threads:[~2024-09-24  9:42 UTC | newest]

Thread overview: 6+ messages
2024-09-19  9:44 [RFC PATCH V2 0/3] Add NUMA mempolicy support for KVM guest_memfd Shivank Garg
2024-09-19  9:44 ` [RFC PATCH V2 1/3] KVM: guest_memfd: Extend creation API to support NUMA mempolicy Shivank Garg
2024-09-23  8:01   ` Chao Gao
2024-09-24  9:42     ` Shivank Garg
2024-09-19  9:44 ` [RFC PATCH V2 2/3] mm: Add mempolicy support to the filemap layer Shivank Garg
2024-09-19  9:44 ` [RFC PATCH V2 3/3] KVM: guest_memfd: Enforce NUMA mempolicy if available Shivank Garg
