From: Peter Xu <peterx@redhat.com>
To: kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Jason Gunthorpe <jgg@nvidia.com>, Nico Pache <npache@redhat.com>,
Zi Yan <ziy@nvidia.com>, Alex Mastro <amastro@fb.com>,
David Hildenbrand <david@redhat.com>,
Alex Williamson <alex@shazbot.org>, Zhi Wang <zhiw@nvidia.com>,
David Laight <david.laight.linux@gmail.com>,
Yi Liu <yi.l.liu@intel.com>, Ankit Agrawal <ankita@nvidia.com>,
peterx@redhat.com, Kevin Tian <kevin.tian@intel.com>,
Andrew Morton <akpm@linux-foundation.org>
Subject: [PATCH v2 1/4] mm/thp: Allow thp_get_unmapped_area_vmflags() to take alignment
Date: Thu, 4 Dec 2025 10:10:00 -0500
Message-ID: <20251204151003.171039-2-peterx@redhat.com>
In-Reply-To: <20251204151003.171039-1-peterx@redhat.com>
Add an "align" parameter to thp_get_unmapped_area_vmflags() so that callers
can request an unmapped area with any alignment, rather than always PMD_SIZE.
There are two existing callers; pass PMD_SIZE explicitly for both of them.
No functional change intended.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/huge_mm.h | 5 +++--
 mm/huge_memory.c        | 7 ++++---
 mm/mmap.c               | 3 ++-
 3 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 71ac78b9f834f..1c221550362d7 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -362,7 +362,7 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags);
unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags,
- vm_flags_t vm_flags);
+ unsigned long align, vm_flags_t vm_flags);
bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins);
int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
@@ -559,7 +559,8 @@ static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
static inline unsigned long
thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff,
- unsigned long flags, vm_flags_t vm_flags)
+ unsigned long flags, unsigned long align,
+ vm_flags_t vm_flags)
{
return 0;
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6cba1cb14b23a..ab2450b985171 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1155,12 +1155,12 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags,
- vm_flags_t vm_flags)
+ unsigned long align, vm_flags_t vm_flags)
{
unsigned long ret;
loff_t off = (loff_t)pgoff << PAGE_SHIFT;
- ret = __thp_get_unmapped_area(filp, addr, len, off, flags, PMD_SIZE, vm_flags);
+ ret = __thp_get_unmapped_area(filp, addr, len, off, flags, align, vm_flags);
if (ret)
return ret;
@@ -1171,7 +1171,8 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags)
{
- return thp_get_unmapped_area_vmflags(filp, addr, len, pgoff, flags, 0);
+ return thp_get_unmapped_area_vmflags(filp, addr, len, pgoff, flags,
+ PMD_SIZE, 0);
}
EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
diff --git a/mm/mmap.c b/mm/mmap.c
index 5fd3b80fda1d5..8fa397a18252e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -846,7 +846,8 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
&& IS_ALIGNED(len, PMD_SIZE)) {
/* Ensures that larger anonymous mappings are THP aligned. */
addr = thp_get_unmapped_area_vmflags(file, addr, len,
- pgoff, flags, vm_flags);
+ pgoff, flags, PMD_SIZE,
+ vm_flags);
} else {
addr = mm_get_unmapped_area_vmflags(current->mm, file, addr, len,
pgoff, flags, vm_flags);
--
2.50.1
Thread overview: 9+ messages
2025-12-04 15:09 [PATCH v2 0/4] mm/vfio: huge pfnmaps with !MAP_FIXED mappings Peter Xu
2025-12-04 15:10 ` Peter Xu [this message]
2025-12-04 15:10 ` [PATCH v2 2/4] mm: Add file_operations.get_mapping_order() Peter Xu
2025-12-04 15:19 ` Peter Xu
2025-12-04 15:10 ` [PATCH v2 3/4] vfio: Introduce vfio_device_ops.get_mapping_order hook Peter Xu
2025-12-04 15:10 ` [PATCH v2 4/4] vfio-pci: Best-effort huge pfnmaps with !MAP_FIXED mappings Peter Xu
2025-12-05 4:33 ` kernel test robot
2025-12-05 7:45 ` kernel test robot
2025-12-04 18:16 ` [PATCH v2 0/4] mm/vfio: " Cédric Le Goater