linux-mm.kvack.org archive mirror
* [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint()
@ 2024-12-10  2:41 Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 01/17] mm: Introduce generic_mmap_hint() Kalesh Singh
                   ` (16 more replies)
  0 siblings, 17 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Hi all,

This series introduces arch_mmap_hint() to handle allocating VA space
for the hint address.

Patches 1-16 introduce this new helper and Patch 17 uses it to fix the
issue of the mmap hint being ignored in some cases due to THP alignment [1].

[1] https://lore.kernel.org/r/20241118214650.3667577-1-kaleshsingh@google.com/

Thanks,
Kalesh

Kalesh Singh (17):
  mm: Introduce generic_mmap_hint()
  mm: x86: Introduce arch_mmap_hint()
  mm: arm: Introduce arch_mmap_hint()
  mm: alpha: Introduce arch_mmap_hint()
  mm: arc: Use generic_mmap_hint()
  mm: csky: Introduce arch_mmap_hint()
  mm: loongarch: Introduce arch_mmap_hint()
  mm: mips: Introduce arch_mmap_hint()
  mm: parisc: Introduce arch_mmap_hint()
  mm: s390: Introduce arch_mmap_hint()
  mm: sh: Introduce arch_mmap_hint()
  mm: sparc32: Introduce arch_mmap_hint()
  mm: sparc64: Introduce arch_mmap_hint()
  mm: xtensa: Introduce arch_mmap_hint()
  mm: powerpc: Introduce arch_mmap_hint()
  mm: Fallback to generic_mmap_hint()
  mm: Respect mmap hint before THP alignment if allocation is possible

 arch/alpha/include/asm/pgtable.h           |  1 +
 arch/alpha/kernel/osf_sys.c                | 29 ++++++++++--
 arch/arc/mm/mmap.c                         | 12 ++---
 arch/arm/include/asm/pgtable.h             |  1 +
 arch/arm/mm/mmap.c                         | 54 ++++++++++++---------
 arch/csky/abiv1/inc/abi/pgtable-bits.h     |  1 +
 arch/csky/abiv1/mmap.c                     | 38 ++++++++++-----
 arch/loongarch/include/asm/pgtable.h       |  1 +
 arch/loongarch/mm/mmap.c                   | 40 ++++++++++------
 arch/mips/include/asm/pgtable.h            |  1 +
 arch/mips/mm/mmap.c                        | 39 +++++++++------
 arch/parisc/include/asm/pgtable.h          |  1 +
 arch/parisc/kernel/sys_parisc.c            | 37 ++++++++++-----
 arch/powerpc/include/asm/book3s/64/slice.h |  1 +
 arch/powerpc/mm/book3s64/slice.c           | 31 ++++++++++++
 arch/s390/include/asm/pgtable.h            |  1 +
 arch/s390/mm/mmap.c                        | 32 ++++++-------
 arch/sh/include/asm/pgtable.h              |  1 +
 arch/sh/mm/mmap.c                          | 48 ++++++++++---------
 arch/sparc/include/asm/pgtable_32.h        |  1 +
 arch/sparc/include/asm/pgtable_64.h        |  1 +
 arch/sparc/kernel/sys_sparc_32.c           | 21 +++++++--
 arch/sparc/kernel/sys_sparc_64.c           | 47 +++++++++++++-----
 arch/x86/include/asm/pgtable_64.h          |  1 +
 arch/x86/kernel/sys_x86_64.c               | 49 ++++++++++---------
 arch/xtensa/include/asm/pgtable.h          |  1 +
 arch/xtensa/kernel/syscall.c               | 26 +++++++++-
 include/linux/sched/mm.h                   |  8 ++++
 mm/huge_memory.c                           | 15 +++---
 mm/mmap.c                                  | 55 ++++++++++++++--------
 30 files changed, 401 insertions(+), 193 deletions(-)

-- 
2.47.0.338.g60cca15819-goog



^ permalink raw reply	[flat|nested] 23+ messages in thread
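(The hint check consolidated in patch 1 can be modeled in isolation. In
this sketch the find_vma_prev() lookup is abstracted into a caller-supplied
free gap [gap_start, gap_end), and a 4 KiB page size is assumed; both are
illustrative simplifications, not kernel API.)

```c
#include <assert.h>

#define MODEL_PAGE_SIZE 4096UL	/* assumed page size for this model */

/*
 * Simplified model of generic_mmap_hint(): return the page-aligned hint
 * if it fits below mmap_end, sits at or above mmap_min_addr, and the
 * range [addr, addr + len) lies inside the free gap; return 0 otherwise.
 */
static unsigned long model_mmap_hint(unsigned long addr, unsigned long len,
				     unsigned long mmap_min_addr,
				     unsigned long mmap_end,
				     unsigned long gap_start,
				     unsigned long gap_end)
{
	if (!addr)
		return 0;

	/* PAGE_ALIGN(): round the hint up to the next page boundary. */
	addr = (addr + MODEL_PAGE_SIZE - 1) & ~(MODEL_PAGE_SIZE - 1);

	if (mmap_end - len >= addr && addr >= mmap_min_addr &&
	    gap_start <= addr && addr + len <= gap_end)
		return addr;

	return 0;
}
```

Returning 0 is the "hint unusable" signal that makes the callers below fall
through to the vm_unmapped_area() search.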

* [PATCH mm-unstable 01/17] mm: Introduce generic_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  3:27   ` Yang Shi
  2024-12-10  2:41 ` [PATCH mm-unstable 02/17] mm: x86: Introduce arch_mmap_hint() Kalesh Singh
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Consolidate the hint searches from both directions (topdown and
bottomup) into generic_mmap_hint().

No functional change is introduced.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 include/linux/sched/mm.h |  4 ++++
 mm/mmap.c                | 45 ++++++++++++++++++++++++----------------
 2 files changed, 31 insertions(+), 18 deletions(-)

diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 928a626725e6..edeec19d1708 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -201,6 +201,10 @@ unsigned long mm_get_unmapped_area_vmflags(struct mm_struct *mm,
 					   unsigned long flags,
 					   vm_flags_t vm_flags);
 
+unsigned long generic_mmap_hint(struct file *filp, unsigned long addr,
+				unsigned long len, unsigned long pgoff,
+				unsigned long flags);
+
 unsigned long
 generic_get_unmapped_area(struct file *filp, unsigned long addr,
 			  unsigned long len, unsigned long pgoff,
diff --git a/mm/mmap.c b/mm/mmap.c
index df9154b15ef9..e97eb8bf4889 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -620,6 +620,27 @@ unsigned long vm_unmapped_area(struct vm_unmapped_area_info *info)
 	return addr;
 }
 
+unsigned long generic_mmap_hint(struct file *filp, unsigned long addr,
+				unsigned long len, unsigned long pgoff,
+				unsigned long flags)
+{
+	struct mm_struct *mm = current->mm;
+	struct vm_area_struct *vma, *prev;
+	const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
+
+	if (!addr)
+		return 0;
+
+	addr = PAGE_ALIGN(addr);
+	vma = find_vma_prev(mm, addr, &prev);
+	if (mmap_end - len >= addr && addr >= mmap_min_addr &&
+	    (!vma || addr + len <= vm_start_gap(vma)) &&
+	    (!prev || addr >= vm_end_gap(prev)))
+		return addr;
+
+	return 0;
+}
+
 /* Get an address range which is currently unmapped.
  * For shmat() with addr=0.
  *
@@ -637,7 +658,6 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
 			  unsigned long flags, vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma, *prev;
 	struct vm_unmapped_area_info info = {};
 	const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
 
@@ -647,14 +667,9 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
 	if (flags & MAP_FIXED)
 		return addr;
 
-	if (addr) {
-		addr = PAGE_ALIGN(addr);
-		vma = find_vma_prev(mm, addr, &prev);
-		if (mmap_end - len >= addr && addr >= mmap_min_addr &&
-		    (!vma || addr + len <= vm_start_gap(vma)) &&
-		    (!prev || addr >= vm_end_gap(prev)))
-			return addr;
-	}
+	addr = generic_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
 
 	info.length = len;
 	info.low_limit = mm->mmap_base;
@@ -685,7 +700,6 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 				  unsigned long len, unsigned long pgoff,
 				  unsigned long flags, vm_flags_t vm_flags)
 {
-	struct vm_area_struct *vma, *prev;
 	struct mm_struct *mm = current->mm;
 	struct vm_unmapped_area_info info = {};
 	const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
@@ -698,14 +712,9 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 		return addr;
 
 	/* requesting a specific address */
-	if (addr) {
-		addr = PAGE_ALIGN(addr);
-		vma = find_vma_prev(mm, addr, &prev);
-		if (mmap_end - len >= addr && addr >= mmap_min_addr &&
-				(!vma || addr + len <= vm_start_gap(vma)) &&
-				(!prev || addr >= vm_end_gap(prev)))
-			return addr;
-	}
+	addr = generic_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
 	info.length = len;
-- 
2.47.0.338.g60cca15819-goog




* [PATCH mm-unstable 02/17] mm: x86: Introduce arch_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 01/17] mm: Introduce generic_mmap_hint() Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 03/17] mm: arm: " Kalesh Singh
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Introduce x86 arch_mmap_hint() and define HAVE_ARCH_MMAP_HINT.
This is a preparatory patch, no functional change is introduced.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/x86/include/asm/pgtable_64.h |  1 +
 arch/x86/kernel/sys_x86_64.c      | 49 ++++++++++++++++++-------------
 include/linux/sched/mm.h          |  4 +++
 3 files changed, 33 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
index d1426b64c1b9..4472fd0040c3 100644
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -245,6 +245,7 @@ extern void cleanup_highmap(void);
 
 #define HAVE_ARCH_UNMAPPED_AREA
 #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+#define HAVE_ARCH_MMAP_HINT
 
 #define PAGE_AGP    PAGE_KERNEL_NOCACHE
 #define HAVE_PAGE_AGP 1
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 776ae6fa7f2d..95a39ef915b7 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -123,12 +123,32 @@ static inline unsigned long stack_guard_placement(vm_flags_t vm_flags)
 	return 0;
 }
 
+unsigned long arch_mmap_hint(struct file *filp, unsigned long addr,
+			     unsigned long len, unsigned long pgoff,
+			     unsigned long flags)
+{
+	unsigned long begin, end;
+
+	if (!addr)
+		return 0;
+
+	find_start_end(addr, flags, &begin, &end);
+
+	addr = PAGE_ALIGN(addr);
+
+	if (!mmap_address_hint_valid(addr, len))
+		return 0;
+
+	if (len > end)
+		return 0;
+
+	return generic_mmap_hint(filp, addr, len, pgoff, flags);
+}
+
 unsigned long
 arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len,
 		       unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
 {
-	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
 	struct vm_unmapped_area_info info = {};
 	unsigned long begin, end;
 
@@ -140,13 +160,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len,
 	if (len > end)
 		return -ENOMEM;
 
-	if (addr) {
-		addr = PAGE_ALIGN(addr);
-		vma = find_vma(mm, addr);
-		if (end - len >= addr &&
-		    (!vma || addr + len <= vm_start_gap(vma)))
-			return addr;
-	}
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
 
 	info.length = len;
 	info.low_limit = begin;
@@ -168,8 +184,6 @@ arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr0,
 			  unsigned long len, unsigned long pgoff,
 			  unsigned long flags, vm_flags_t vm_flags)
 {
-	struct vm_area_struct *vma;
-	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
 	struct vm_unmapped_area_info info = {};
 
@@ -186,16 +200,9 @@ arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr0,
 		goto bottomup;
 
 	/* requesting a specific address */
-	if (addr) {
-		addr &= PAGE_MASK;
-		if (!mmap_address_hint_valid(addr, len))
-			goto get_unmapped_area;
-
-		vma = find_vma(mm, addr);
-		if (!vma || addr + len <= vm_start_gap(vma))
-			return addr;
-	}
-get_unmapped_area:
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
 	info.length = len;
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index edeec19d1708..f12d094649f7 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -205,6 +205,10 @@ unsigned long generic_mmap_hint(struct file *filp, unsigned long addr,
 				unsigned long len, unsigned long pgoff,
 				unsigned long flags);
 
+unsigned long arch_mmap_hint(struct file *filp, unsigned long addr,
+			     unsigned long len, unsigned long pgoff,
+			     unsigned long flags);
+
 unsigned long
 generic_get_unmapped_area(struct file *filp, unsigned long addr,
 			  unsigned long len, unsigned long pgoff,
-- 
2.47.0.338.g60cca15819-goog




* [PATCH mm-unstable 03/17] mm: arm: Introduce arch_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 01/17] mm: Introduce generic_mmap_hint() Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 02/17] mm: x86: Introduce arch_mmap_hint() Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 04/17] mm: alpha: " Kalesh Singh
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Introduce arm arch_mmap_hint() and define HAVE_ARCH_MMAP_HINT.
This is a preparatory patch, no functional change is introduced.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/arm/include/asm/pgtable.h |  1 +
 arch/arm/mm/mmap.c             | 54 +++++++++++++++++++---------------
 2 files changed, 32 insertions(+), 23 deletions(-)

diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index be91e376df79..1433b3ff4caa 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -330,6 +330,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
  */
 #define HAVE_ARCH_UNMAPPED_AREA
 #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+#define HAVE_ARCH_MMAP_HINT
 
 #endif /* !__ASSEMBLY__ */
 
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index 3dbb383c26d5..c415410eb64a 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -17,6 +17,31 @@
 	((((addr)+SHMLBA-1)&~(SHMLBA-1)) +	\
 	 (((pgoff)<<PAGE_SHIFT) & (SHMLBA-1)))
 
+unsigned long arch_mmap_hint(struct file *filp, unsigned long addr,
+			     unsigned long len, unsigned long pgoff,
+			     unsigned long flags)
+{
+	int aliasing = cache_is_vipt_aliasing();
+	int do_align = 0;
+
+	if (!addr)
+		return 0;
+
+	/*
+	 * We only need to do colour alignment if either the I or D
+	 * caches alias.
+	 */
+	if (aliasing)
+		do_align = filp || (flags & MAP_SHARED);
+
+	if (do_align)
+		addr = COLOUR_ALIGN(addr, pgoff);
+	else
+		addr = PAGE_ALIGN(addr);
+
+	return generic_mmap_hint(filp, addr, len, pgoff, flags);
+}
+
 /*
  * We need to ensure that shared mappings are correctly aligned to
  * avoid aliasing issues with VIPT caches.  We need to ensure that
@@ -32,7 +57,6 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long flags, vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
 	int do_align = 0;
 	int aliasing = cache_is_vipt_aliasing();
 	struct vm_unmapped_area_info info = {};
@@ -57,17 +81,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	if (len > TASK_SIZE)
 		return -ENOMEM;
 
-	if (addr) {
-		if (do_align)
-			addr = COLOUR_ALIGN(addr, pgoff);
-		else
-			addr = PAGE_ALIGN(addr);
-
-		vma = find_vma(mm, addr);
-		if (TASK_SIZE - len >= addr &&
-		    (!vma || addr + len <= vm_start_gap(vma)))
-			return addr;
-	}
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
 
 	info.length = len;
 	info.low_limit = mm->mmap_base;
@@ -82,7 +98,6 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 		        const unsigned long len, const unsigned long pgoff,
 		        const unsigned long flags, vm_flags_t vm_flags)
 {
-	struct vm_area_struct *vma;
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
 	int do_align = 0;
@@ -108,16 +123,9 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	}
 
 	/* requesting a specific address */
-	if (addr) {
-		if (do_align)
-			addr = COLOUR_ALIGN(addr, pgoff);
-		else
-			addr = PAGE_ALIGN(addr);
-		vma = find_vma(mm, addr);
-		if (TASK_SIZE - len >= addr &&
-				(!vma || addr + len <= vm_start_gap(vma)))
-			return addr;
-	}
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
 	info.length = len;
-- 
2.47.0.338.g60cca15819-goog



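(The COLOUR_ALIGN() macro used by the arm patch above rounds the hint up to
an SHMLBA boundary and then adds the colour bits implied by the file offset,
so a shared mapping gets the same cache colour as its offset. A standalone
version of that arithmetic, assuming arm's usual SHMLBA of four 4 KiB pages;
the macro names here are mine.)

```c
#include <assert.h>

#define MODEL_PAGE_SHIFT 12
#define MODEL_SHMLBA (4UL << MODEL_PAGE_SHIFT)	/* 16 KiB, as on arm */

/* Round addr up to SHMLBA, then add the colour implied by pgoff. */
static unsigned long colour_align(unsigned long addr, unsigned long pgoff)
{
	return ((addr + MODEL_SHMLBA - 1) & ~(MODEL_SHMLBA - 1)) +
	       ((pgoff << MODEL_PAGE_SHIFT) & (MODEL_SHMLBA - 1));
}
```

Note the result can exceed a plain PAGE_ALIGN() of the hint, which is why
the colour-aligned address is re-validated by generic_mmap_hint().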

* [PATCH mm-unstable 04/17] mm: alpha: Introduce arch_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
                   ` (2 preceding siblings ...)
  2024-12-10  2:41 ` [PATCH mm-unstable 03/17] mm: arm: " Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 05/17] mm: arc: Use generic_mmap_hint() Kalesh Singh
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Introduce alpha arch_mmap_hint() and define HAVE_ARCH_MMAP_HINT.
This is a preparatory patch, no functional change is introduced.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/alpha/include/asm/pgtable.h |  1 +
 arch/alpha/kernel/osf_sys.c      | 29 ++++++++++++++++++++++++-----
 2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index 635f0a5f5bbd..372885a01abd 100644
--- a/arch/alpha/include/asm/pgtable.h
+++ b/arch/alpha/include/asm/pgtable.h
@@ -362,5 +362,6 @@ extern void paging_init(void);
 
 /* We have our own get_unmapped_area to cope with ADDR_LIMIT_32BIT.  */
 #define HAVE_ARCH_UNMAPPED_AREA
+#define HAVE_ARCH_MMAP_HINT
 
 #endif /* _ALPHA_PGTABLE_H */
diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
index 86185021f75a..68e61dece537 100644
--- a/arch/alpha/kernel/osf_sys.c
+++ b/arch/alpha/kernel/osf_sys.c
@@ -1225,6 +1225,27 @@ arch_get_unmapped_area_1(unsigned long addr, unsigned long len,
 	return vm_unmapped_area(&info);
 }
 
+unsigned long arch_mmap_hint(struct file *filp, unsigned long addr,
+			     unsigned long len, unsigned long pgoff,
+			     unsigned long flags)
+{
+	unsigned long limit;
+
+	if (!addr)
+		return 0;
+
+	/* "32 bit" actually means 31 bit, since pointers sign extend.  */
+	if (current->personality & ADDR_LIMIT_32BIT)
+		limit = 0x80000000;
+	else
+		limit = TASK_SIZE;
+
+	if (limit - len < addr)
+		return 0;
+
+	return generic_mmap_hint(filp, addr, len, pgoff, flags);
+}
+
 unsigned long
 arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		       unsigned long len, unsigned long pgoff,
@@ -1254,11 +1275,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	   merely specific addresses, but regions of memory -- perhaps
 	   this feature should be incorporated into all ports?  */
 
-	if (addr) {
-		addr = arch_get_unmapped_area_1 (PAGE_ALIGN(addr), len, limit);
-		if (addr != (unsigned long) -ENOMEM)
-			return addr;
-	}
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
 
 	/* Next, try allocating at TASK_UNMAPPED_BASE.  */
 	addr = arch_get_unmapped_area_1 (PAGE_ALIGN(TASK_UNMAPPED_BASE),
-- 
2.47.0.338.g60cca15819-goog



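(The alpha patch above caps the hint by personality: under ADDR_LIMIT_32BIT
pointers sign-extend, so the effective limit is 2^31 rather than TASK_SIZE.
The rejection test `limit - len < addr` can be checked in isolation; the
TASK_SIZE stand-in below is illustrative, not alpha's real value.)

```c
#include <assert.h>

#define MODEL_TASK_SIZE 0x40000000000UL	/* illustrative upper VA limit */

/* Return nonzero if [addr, addr + len) fits below the personality limit. */
static int alpha_hint_fits(unsigned long addr, unsigned long len,
			   int addr_limit_32bit)
{
	/* "32 bit" really means 31 bit, since pointers sign-extend. */
	unsigned long limit = addr_limit_32bit ? 0x80000000UL
					       : MODEL_TASK_SIZE;

	return !(limit - len < addr);
}
```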

* [PATCH mm-unstable 05/17] mm: arc: Use generic_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
                   ` (3 preceding siblings ...)
  2024-12-10  2:41 ` [PATCH mm-unstable 04/17] mm: alpha: " Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 06/17] mm: csky: Introduce arch_mmap_hint() Kalesh Singh
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Use generic_mmap_hint() in arc's arch_get_unmapped_area().

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/arc/mm/mmap.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/arc/mm/mmap.c b/arch/arc/mm/mmap.c
index 2185afe8d59f..6b1fcea06779 100644
--- a/arch/arc/mm/mmap.c
+++ b/arch/arc/mm/mmap.c
@@ -27,7 +27,6 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long flags, vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
 	struct vm_unmapped_area_info info = {};
 
 	/*
@@ -43,14 +42,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	if (len > TASK_SIZE)
 		return -ENOMEM;
 
-	if (addr) {
-		addr = PAGE_ALIGN(addr);
-
-		vma = find_vma(mm, addr);
-		if (TASK_SIZE - len >= addr &&
-		    (!vma || addr + len <= vm_start_gap(vma)))
-			return addr;
-	}
+	addr = generic_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
 
 	info.length = len;
 	info.low_limit = mm->mmap_base;
-- 
2.47.0.338.g60cca15819-goog




* [PATCH mm-unstable 06/17] mm: csky: Introduce arch_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
                   ` (4 preceding siblings ...)
  2024-12-10  2:41 ` [PATCH mm-unstable 05/17] mm: arc: Use generic_mmap_hint() Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 07/17] mm: loongarch: " Kalesh Singh
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Introduce csky arch_mmap_hint() and define HAVE_ARCH_MMAP_HINT.
This is a preparatory patch, no functional change is introduced.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/csky/abiv1/inc/abi/pgtable-bits.h |  1 +
 arch/csky/abiv1/mmap.c                 | 38 ++++++++++++++++++--------
 2 files changed, 27 insertions(+), 12 deletions(-)

diff --git a/arch/csky/abiv1/inc/abi/pgtable-bits.h b/arch/csky/abiv1/inc/abi/pgtable-bits.h
index ae7a2f76dd42..c346a9fcb522 100644
--- a/arch/csky/abiv1/inc/abi/pgtable-bits.h
+++ b/arch/csky/abiv1/inc/abi/pgtable-bits.h
@@ -51,5 +51,6 @@
 					((offset) << 10)})
 
 #define HAVE_ARCH_UNMAPPED_AREA
+#define HAVE_ARCH_MMAP_HINT
 
 #endif /* __ASM_CSKY_PGTABLE_BITS_H */
diff --git a/arch/csky/abiv1/mmap.c b/arch/csky/abiv1/mmap.c
index 1047865e82a9..184921a73856 100644
--- a/arch/csky/abiv1/mmap.c
+++ b/arch/csky/abiv1/mmap.c
@@ -13,6 +13,29 @@
 	((((addr)+SHMLBA-1)&~(SHMLBA-1)) +	\
 	 (((pgoff)<<PAGE_SHIFT) & (SHMLBA-1)))
 
+unsigned long arch_mmap_hint(struct file *filp, unsigned long addr,
+			     unsigned long len, unsigned long pgoff,
+			     unsigned long flags)
+{
+	int do_align = 0;
+
+	if (!addr)
+		return 0;
+
+	/*
+	 * We only need to do colour alignment if either the I or D
+	 * caches alias.
+	 */
+	do_align = filp || (flags & MAP_SHARED);
+
+	if (do_align)
+		addr = COLOUR_ALIGN(addr, pgoff);
+	else
+		addr = PAGE_ALIGN(addr);
+
+	return generic_mmap_hint(filp, addr, len, pgoff, flags);
+}
+
 /*
  * We need to ensure that shared mappings are correctly aligned to
  * avoid aliasing issues with VIPT caches.  We need to ensure that
@@ -27,7 +50,6 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long flags, vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
 	int do_align = 0;
 	struct vm_unmapped_area_info info = {
 		.length = len,
@@ -55,17 +77,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	if (len > TASK_SIZE)
 		return -ENOMEM;
 
-	if (addr) {
-		if (do_align)
-			addr = COLOUR_ALIGN(addr, pgoff);
-		else
-			addr = PAGE_ALIGN(addr);
-
-		vma = find_vma(mm, addr);
-		if (TASK_SIZE - len >= addr &&
-		    (!vma || addr + len <= vm_start_gap(vma)))
-			return addr;
-	}
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
 
 	info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
 	return vm_unmapped_area(&info);
-- 
2.47.0.338.g60cca15819-goog




* [PATCH mm-unstable 07/17] mm: loongarch: Introduce arch_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
                   ` (5 preceding siblings ...)
  2024-12-10  2:41 ` [PATCH mm-unstable 06/17] mm: csky: Introduce arch_mmap_hint() Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 08/17] mm: mips: Introduce arch_mmap_hint() Kalesh Singh
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Introduce loongarch arch_mmap_hint() and define HAVE_ARCH_MMAP_HINT.
This is a preparatory patch, no functional change is introduced.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/loongarch/include/asm/pgtable.h |  1 +
 arch/loongarch/mm/mmap.c             | 40 ++++++++++++++++++----------
 2 files changed, 27 insertions(+), 14 deletions(-)

diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index da346733a1da..326a6c4b7488 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -624,6 +624,7 @@ static inline long pmd_protnone(pmd_t pmd)
  */
 #define HAVE_ARCH_UNMAPPED_AREA
 #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+#define HAVE_ARCH_MMAP_HINT
 
 #endif /* !__ASSEMBLY__ */
 
diff --git a/arch/loongarch/mm/mmap.c b/arch/loongarch/mm/mmap.c
index 914e82ff3f65..b7db43fabca1 100644
--- a/arch/loongarch/mm/mmap.c
+++ b/arch/loongarch/mm/mmap.c
@@ -17,12 +17,32 @@
 
 enum mmap_allocation_direction {UP, DOWN};
 
+unsigned long arch_mmap_hint(struct file *filp, unsigned long addr,
+			     unsigned long len, unsigned long pgoff,
+			     unsigned long flags)
+{
+	int do_color_align = 0;
+
+	if (!addr)
+		return 0;
+
+	if (filp || (flags & MAP_SHARED))
+		do_color_align = 1;
+
+	if (do_color_align)
+		addr = COLOUR_ALIGN(addr, pgoff);
+	else
+		addr = PAGE_ALIGN(addr);
+
+	return generic_mmap_hint(filp, addr, len, pgoff, flags);
+}
+
+
 static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	unsigned long addr0, unsigned long len, unsigned long pgoff,
 	unsigned long flags, enum mmap_allocation_direction dir)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
 	unsigned long addr = addr0;
 	int do_color_align;
 	struct vm_unmapped_area_info info = {};
@@ -45,23 +65,15 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 		return addr;
 	}
 
+	/* requesting a specific address */
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
+
 	do_color_align = 0;
 	if (filp || (flags & MAP_SHARED))
 		do_color_align = 1;
 
-	/* requesting a specific address */
-	if (addr) {
-		if (do_color_align)
-			addr = COLOUR_ALIGN(addr, pgoff);
-		else
-			addr = PAGE_ALIGN(addr);
-
-		vma = find_vma(mm, addr);
-		if (TASK_SIZE - len >= addr &&
-		    (!vma || addr + len <= vm_start_gap(vma)))
-			return addr;
-	}
-
 	info.length = len;
 	info.align_mask = do_color_align ? (PAGE_MASK & SHM_ALIGN_MASK) : 0;
 	info.align_offset = pgoff << PAGE_SHIFT;
-- 
2.47.0.338.g60cca15819-goog




* [PATCH mm-unstable 08/17] mm: mips: Introduce arch_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
                   ` (6 preceding siblings ...)
  2024-12-10  2:41 ` [PATCH mm-unstable 07/17] mm: loongarch: " Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 09/17] mm: parisc: " Kalesh Singh
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Introduce mips arch_mmap_hint() and define HAVE_ARCH_MMAP_HINT.
This is a preparatory patch, no functional change is introduced.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/mips/include/asm/pgtable.h |  1 +
 arch/mips/mm/mmap.c             | 39 +++++++++++++++++++++------------
 2 files changed, 26 insertions(+), 14 deletions(-)

diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index c29a551eb0ca..837f25624369 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -766,5 +766,6 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
  */
 #define HAVE_ARCH_UNMAPPED_AREA
 #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+#define HAVE_ARCH_MMAP_HINT
 
 #endif /* _ASM_PGTABLE_H */
diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
index 5d2a1225785b..cd09a933aad6 100644
--- a/arch/mips/mm/mmap.c
+++ b/arch/mips/mm/mmap.c
@@ -26,12 +26,31 @@ EXPORT_SYMBOL(shm_align_mask);
 
 enum mmap_allocation_direction {UP, DOWN};
 
+unsigned long arch_mmap_hint(struct file *filp, unsigned long addr,
+			     unsigned long len, unsigned long pgoff,
+			     unsigned long flags)
+{
+	int do_color_align = 0;
+
+	if (!addr)
+		return 0;
+
+	if (filp || (flags & MAP_SHARED))
+		do_color_align = 1;
+
+	if (do_color_align)
+		addr = COLOUR_ALIGN(addr, pgoff);
+	else
+		addr = PAGE_ALIGN(addr);
+
+	return generic_mmap_hint(filp, addr, len, pgoff, flags);
+}
+
 static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	unsigned long addr0, unsigned long len, unsigned long pgoff,
 	unsigned long flags, enum mmap_allocation_direction dir)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
 	unsigned long addr = addr0;
 	int do_color_align;
 	struct vm_unmapped_area_info info = {};
@@ -54,23 +73,15 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 		return addr;
 	}
 
+	/* requesting a specific address */
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
+
 	do_color_align = 0;
 	if (filp || (flags & MAP_SHARED))
 		do_color_align = 1;
 
-	/* requesting a specific address */
-	if (addr) {
-		if (do_color_align)
-			addr = COLOUR_ALIGN(addr, pgoff);
-		else
-			addr = PAGE_ALIGN(addr);
-
-		vma = find_vma(mm, addr);
-		if (TASK_SIZE - len >= addr &&
-		    (!vma || addr + len <= vm_start_gap(vma)))
-			return addr;
-	}
-
 	info.length = len;
 	info.align_mask = do_color_align ? (PAGE_MASK & shm_align_mask) : 0;
 	info.align_offset = pgoff << PAGE_SHIFT;
-- 
2.47.0.338.g60cca15819-goog




* [PATCH mm-unstable 09/17] mm: parisc: Introduce arch_align_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
                   ` (7 preceding siblings ...)
  2024-12-10  2:41 ` [PATCH mm-unstable 08/17] mm: mips: Introduce arch_align_mmap_hint() Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 10/17] mm: s390: Introduce arch_mmap_hint() Kalesh Singh
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Introduce parisc arch_mmap_hint() and define HAVE_ARCH_MMAP_HINT.
This is a preparatory patch; no functional change is introduced.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/parisc/include/asm/pgtable.h |  1 +
 arch/parisc/kernel/sys_parisc.c   | 37 ++++++++++++++++++++-----------
 2 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
index babf65751e81..73987357c78e 100644
--- a/arch/parisc/include/asm/pgtable.h
+++ b/arch/parisc/include/asm/pgtable.h
@@ -505,6 +505,7 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
 
 #define HAVE_ARCH_UNMAPPED_AREA
 #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+#define HAVE_ARCH_MMAP_HINT
 
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
 #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
index f852fe274abe..8ab05b29c505 100644
--- a/arch/parisc/kernel/sys_parisc.c
+++ b/arch/parisc/kernel/sys_parisc.c
@@ -96,12 +96,32 @@ unsigned long mmap_upper_limit(struct rlimit *rlim_stack)
 
 enum mmap_allocation_direction {UP, DOWN};
 
+unsigned long arch_mmap_hint(struct file *filp, unsigned long addr,
+			     unsigned long len, unsigned long pgoff,
+			     unsigned long flags)
+{
+	unsigned long filp_pgoff = GET_FILP_PGOFF(filp);
+	int do_color_align = 0;
+
+	if (!addr)
+		return 0;
+
+	if (filp || (flags & MAP_SHARED))
+		do_color_align = 1;
+
+	if (do_color_align)
+		addr = COLOR_ALIGN(addr, filp_pgoff, pgoff);
+	else
+		addr = PAGE_ALIGN(addr);
+
+	return generic_mmap_hint(filp, addr, len, pgoff, flags);
+}
+
 static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	unsigned long addr, unsigned long len, unsigned long pgoff,
 	unsigned long flags, enum mmap_allocation_direction dir)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma, *prev;
 	unsigned long filp_pgoff;
 	int do_color_align;
 	struct vm_unmapped_area_info info = {
@@ -128,18 +148,9 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 		return addr;
 	}
 
-	if (addr) {
-		if (do_color_align)
-			addr = COLOR_ALIGN(addr, filp_pgoff, pgoff);
-		else
-			addr = PAGE_ALIGN(addr);
-
-		vma = find_vma_prev(mm, addr, &prev);
-		if (TASK_SIZE - len >= addr &&
-		    (!vma || addr + len <= vm_start_gap(vma)) &&
-		    (!prev || addr >= vm_end_gap(prev)))
-			return addr;
-	}
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
 
 	info.align_mask = do_color_align ? (PAGE_MASK & (SHM_COLOUR - 1)) : 0;
 	info.align_offset = shared_align_offset(filp_pgoff, pgoff);
-- 
2.47.0.338.g60cca15819-goog




* [PATCH mm-unstable 10/17] mm: s390: Introduce arch_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
                   ` (8 preceding siblings ...)
  2024-12-10  2:41 ` [PATCH mm-unstable 09/17] mm: parisc: " Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 11/17] mm: sh: " Kalesh Singh
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Introduce s390 arch_mmap_hint() and define HAVE_ARCH_MMAP_HINT.
This is a preparatory patch; no functional change is introduced.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/s390/include/asm/pgtable.h |  1 +
 arch/s390/mm/mmap.c             | 32 ++++++++++++++++----------------
 2 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 48268095b0a3..eaecb558ab9b 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1997,6 +1997,7 @@ extern void s390_reset_cmma(struct mm_struct *mm);
 /* s390 has a private copy of get unmapped area to deal with cache synonyms */
 #define HAVE_ARCH_UNMAPPED_AREA
 #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+#define HAVE_ARCH_MMAP_HINT
 
 #define pmd_pgtable(pmd) \
 	((pgtable_t)__va(pmd_val(pmd) & -sizeof(pte_t)*PTRS_PER_PTE))
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index 33f3504be90b..3f82401b77cd 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -83,12 +83,21 @@ static int get_align_mask(struct file *filp, unsigned long flags)
 	return 0;
 }
 
+unsigned long arch_mmap_hint(struct file *filp, unsigned long addr,
+			     unsigned long len, unsigned long pgoff,
+			     unsigned long flags)
+{
+	if (len > TASK_SIZE - mmap_min_addr)
+		return 0;
+
+	return generic_mmap_hint(filp, addr, len, pgoff, flags);
+}
+
 unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 				     unsigned long len, unsigned long pgoff,
 				     unsigned long flags, vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
 	struct vm_unmapped_area_info info = {};
 
 	if (len > TASK_SIZE - mmap_min_addr)
@@ -97,13 +106,9 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	if (flags & MAP_FIXED)
 		goto check_asce_limit;
 
-	if (addr) {
-		addr = PAGE_ALIGN(addr);
-		vma = find_vma(mm, addr);
-		if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
-		    (!vma || addr + len <= vm_start_gap(vma)))
-			goto check_asce_limit;
-	}
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		goto check_asce_limit;
 
 	info.length = len;
 	info.low_limit = mm->mmap_base;
@@ -123,7 +128,6 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp, unsigned long ad
 					     unsigned long len, unsigned long pgoff,
 					     unsigned long flags, vm_flags_t vm_flags)
 {
-	struct vm_area_struct *vma;
 	struct mm_struct *mm = current->mm;
 	struct vm_unmapped_area_info info = {};
 
@@ -135,13 +139,9 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp, unsigned long ad
 		goto check_asce_limit;
 
 	/* requesting a specific address */
-	if (addr) {
-		addr = PAGE_ALIGN(addr);
-		vma = find_vma(mm, addr);
-		if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
-				(!vma || addr + len <= vm_start_gap(vma)))
-			goto check_asce_limit;
-	}
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		goto check_asce_limit;
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
 	info.length = len;
-- 
2.47.0.338.g60cca15819-goog




* [PATCH mm-unstable 11/17] mm: sh: Introduce arch_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
                   ` (9 preceding siblings ...)
  2024-12-10  2:41 ` [PATCH mm-unstable 10/17] mm: s390: Introduce arch_mmap_hint() Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 12/17] mm: sparc32: " Kalesh Singh
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Introduce sh arch_mmap_hint() and define HAVE_ARCH_MMAP_HINT.
This is a preparatory patch; no functional change is introduced.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/sh/include/asm/pgtable.h |  1 +
 arch/sh/mm/mmap.c             | 48 +++++++++++++++++++----------------
 2 files changed, 27 insertions(+), 22 deletions(-)

diff --git a/arch/sh/include/asm/pgtable.h b/arch/sh/include/asm/pgtable.h
index 729f5c6225fb..072dbe038808 100644
--- a/arch/sh/include/asm/pgtable.h
+++ b/arch/sh/include/asm/pgtable.h
@@ -149,5 +149,6 @@ static inline bool pte_access_permitted(pte_t pte, bool write)
 /* arch/sh/mm/mmap.c */
 #define HAVE_ARCH_UNMAPPED_AREA
 #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+#define HAVE_ARCH_MMAP_HINT
 
 #endif /* __ASM_SH_PGTABLE_H */
diff --git a/arch/sh/mm/mmap.c b/arch/sh/mm/mmap.c
index c442734d9b0c..5c96055dd5f5 100644
--- a/arch/sh/mm/mmap.c
+++ b/arch/sh/mm/mmap.c
@@ -51,6 +51,26 @@ static inline unsigned long COLOUR_ALIGN(unsigned long addr,
 	return base + off;
 }
 
+unsigned long arch_mmap_hint(struct file *filp, unsigned long addr,
+			     unsigned long len, unsigned long pgoff,
+			     unsigned long flags)
+{
+	int do_color_align = 0;
+
+	if (!addr)
+		return 0;
+
+	if (filp || (flags & MAP_SHARED))
+		do_color_align = 1;
+
+	if (do_color_align)
+		addr = COLOUR_ALIGN(addr, pgoff);
+	else
+		addr = PAGE_ALIGN(addr);
+
+	return generic_mmap_hint(filp, addr, len, pgoff, flags);
+}
+
 unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	unsigned long len, unsigned long pgoff, unsigned long flags,
 	vm_flags_t vm_flags)
@@ -77,17 +97,9 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	if (filp || (flags & MAP_SHARED))
 		do_colour_align = 1;
 
-	if (addr) {
-		if (do_colour_align)
-			addr = COLOUR_ALIGN(addr, pgoff);
-		else
-			addr = PAGE_ALIGN(addr);
-
-		vma = find_vma(mm, addr);
-		if (TASK_SIZE - len >= addr &&
-		    (!vma || addr + len <= vm_start_gap(vma)))
-			return addr;
-	}
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
 
 	info.length = len;
 	info.low_limit = TASK_UNMAPPED_BASE;
@@ -126,17 +138,9 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 		do_colour_align = 1;
 
 	/* requesting a specific address */
-	if (addr) {
-		if (do_colour_align)
-			addr = COLOUR_ALIGN(addr, pgoff);
-		else
-			addr = PAGE_ALIGN(addr);
-
-		vma = find_vma(mm, addr);
-		if (TASK_SIZE - len >= addr &&
-		    (!vma || addr + len <= vm_start_gap(vma)))
-			return addr;
-	}
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
 	info.length = len;
-- 
2.47.0.338.g60cca15819-goog




* [PATCH mm-unstable 12/17] mm: sparc32: Introduce arch_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
                   ` (10 preceding siblings ...)
  2024-12-10  2:41 ` [PATCH mm-unstable 11/17] mm: sh: " Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 13/17] mm: sparc64: " Kalesh Singh
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Introduce sparc32 arch_mmap_hint() and define HAVE_ARCH_MMAP_HINT.

If a sufficiently sized hole doesn't exist at the hint address,
fall back to searching the entire valid VA space instead of only
the VA space above the hint address.
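
The window change can be modelled in a few lines of C. This is a toy
model, not kernel code: the constants are illustrative stand-ins for
the real sparc32 values, and the two helpers compare the lower bound of
the gap search once the hint cannot be satisfied, before and after this
patch.

```c
#include <assert.h>

#define TASK_UNMAPPED_BASE 0x10000000UL
#define TASK_SIZE          0x80000000UL

/* Pre-patch: a failed hint left the search lower bound at the hint,
 * so only VA space above the hint was considered. */
static unsigned long search_low_limit_old(unsigned long hint)
{
	return hint ? hint : TASK_UNMAPPED_BASE;
}

/* Post-patch: the hint is handled separately, and a failed hint falls
 * back to searching from TASK_UNMAPPED_BASE, i.e. the whole window. */
static unsigned long search_low_limit_new(unsigned long hint)
{
	(void)hint;
	return TASK_UNMAPPED_BASE;
}
```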

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/sparc/include/asm/pgtable_32.h |  1 +
 arch/sparc/kernel/sys_sparc_32.c    | 21 ++++++++++++++++++---
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h
index 62bcafe38b1f..95084c4d0b01 100644
--- a/arch/sparc/include/asm/pgtable_32.h
+++ b/arch/sparc/include/asm/pgtable_32.h
@@ -437,6 +437,7 @@ static inline int io_remap_pfn_range(struct vm_area_struct *vma,
 
 /* We provide our own get_unmapped_area to cope with VA holes for userland */
 #define HAVE_ARCH_UNMAPPED_AREA
+#define HAVE_ARCH_MMAP_HINT
 
 #define pmd_pgtable(pmd)	((pgtable_t)__pmd_page(pmd))
 
diff --git a/arch/sparc/kernel/sys_sparc_32.c b/arch/sparc/kernel/sys_sparc_32.c
index fb31bc0c5b48..2d5065ee1a94 100644
--- a/arch/sparc/kernel/sys_sparc_32.c
+++ b/arch/sparc/kernel/sys_sparc_32.c
@@ -40,6 +40,19 @@ SYSCALL_DEFINE0(getpagesize)
 	return PAGE_SIZE; /* Possibly older binaries want 8192 on sun4's? */
 }
 
+unsigned long arch_mmap_hint(struct file *filp, unsigned long addr,
+			     unsigned long len, unsigned long pgoff,
+			     unsigned long flags)
+{
+	if (!addr)
+		return 0;
+
+	if (len > TASK_SIZE - PAGE_SIZE)
+		return 0;
+
+	return generic_mmap_hint(filp, addr, len, pgoff, flags);
+}
+
 unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len, unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
 {
 	struct vm_unmapped_area_info info = {};
@@ -61,11 +74,13 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
 	/* See asm-sparc/uaccess.h */
 	if (len > TASK_SIZE - PAGE_SIZE)
 		return -ENOMEM;
-	if (!addr)
-		addr = TASK_UNMAPPED_BASE;
+
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
 
 	info.length = len;
-	info.low_limit = addr;
+	info.low_limit = TASK_UNMAPPED_BASE;
 	info.high_limit = TASK_SIZE;
 	if (!file_hugepage) {
 		info.align_mask = (flags & MAP_SHARED) ?
-- 
2.47.0.338.g60cca15819-goog




* [PATCH mm-unstable 13/17] mm: sparc64: Introduce arch_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
                   ` (11 preceding siblings ...)
  2024-12-10  2:41 ` [PATCH mm-unstable 12/17] mm: sparc32: " Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 14/17] mm: xtensa: " Kalesh Singh
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Introduce sparc64 arch_mmap_hint() and define HAVE_ARCH_MMAP_HINT.
This is a preparatory patch; no functional change is introduced.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/sparc/include/asm/pgtable_64.h |  1 +
 arch/sparc/kernel/sys_sparc_64.c    | 47 +++++++++++++++++++++--------
 2 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 2b7f358762c1..f24a4eb2777b 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -1148,6 +1148,7 @@ static inline bool pte_access_permitted(pte_t pte, bool write)
  */
 #define HAVE_ARCH_UNMAPPED_AREA
 #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+#define HAVE_ARCH_MMAP_HINT
 
 /* We provide a special get_unmapped_area for framebuffer mmaps to try and use
  * the largest alignment possible such that larget PTEs can be used.
diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index c5a284df7b41..a782696e98e0 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -98,10 +98,39 @@ static unsigned long get_align_mask(struct file *filp, unsigned long flags)
 	return 0;
 }
 
+unsigned long arch_mmap_hint(struct file *filp, unsigned long addr,
+			     unsigned long len, unsigned long pgoff,
+			     unsigned long flags)
+{
+	unsigned long task_size = TASK_SIZE;
+	bool file_hugepage = false;
+	int do_color_align = 0;
+
+	if (!addr)
+		return 0;
+
+	if (filp && is_file_hugepages(filp))
+		file_hugepage = true;
+
+	if ((filp || (flags & MAP_SHARED)) && !file_hugepage)
+		do_color_align = 1;
+
+	if (test_thread_flag(TIF_32BIT))
+		task_size = STACK_TOP32;
+
+	if (unlikely(len > task_size || len >= VA_EXCLUDE_START))
+		return 0;
+
+	if (do_color_align)
+		addr = COLOR_ALIGN(addr, pgoff);
+	else
+		addr = PAGE_ALIGN(addr);
+
+	return generic_mmap_hint(filp, addr, len, pgoff, flags);
+}
+
 unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len, unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
 {
-	struct mm_struct *mm = current->mm;
-	struct vm_area_struct * vma;
 	unsigned long task_size = TASK_SIZE;
 	int do_color_align;
 	struct vm_unmapped_area_info info = {};
@@ -129,17 +158,9 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
 	if ((filp || (flags & MAP_SHARED)) && !file_hugepage)
 		do_color_align = 1;
 
-	if (addr) {
-		if (do_color_align)
-			addr = COLOR_ALIGN(addr, pgoff);
-		else
-			addr = PAGE_ALIGN(addr);
-
-		vma = find_vma(mm, addr);
-		if (task_size - len >= addr &&
-		    (!vma || addr + len <= vm_start_gap(vma)))
-			return addr;
-	}
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
 
 	info.length = len;
 	info.low_limit = TASK_UNMAPPED_BASE;
-- 
2.47.0.338.g60cca15819-goog




* [PATCH mm-unstable 14/17] mm: xtensa: Introduce arch_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
                   ` (12 preceding siblings ...)
  2024-12-10  2:41 ` [PATCH mm-unstable 13/17] mm: sparc64: " Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 15/17] mm: powerpc: " Kalesh Singh
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Introduce xtensa arch_mmap_hint() and define HAVE_ARCH_MMAP_HINT.

If a sufficiently sized hole doesn't exist at the hint address,
fall back to searching the entire valid VA space instead of only
the VA space above the hint address.
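
For shared mappings the hint itself is colour-aligned before the
generic check. The alignment step can be sketched as below; this is a
toy model, and COLOUR_SHIFT is an illustrative value, not xtensa's
actual cache geometry. The aligned hint keeps the same cache colour as
the file offset, so aliasing mappings land on compatible colours.

```c
#include <assert.h>

#define COLOUR_SHIFT 13
#define COLOUR_MASK  ((1UL << COLOUR_SHIFT) - 1)
#define PAGE_SHIFT   12

/* Round addr up to a colour boundary, then add the colour implied by
 * the page offset within the file. */
static unsigned long colour_align(unsigned long addr, unsigned long pgoff)
{
	unsigned long base = (addr + COLOUR_MASK) & ~COLOUR_MASK;
	unsigned long off  = (pgoff << PAGE_SHIFT) & COLOUR_MASK;

	return base + off;
}
```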

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/xtensa/include/asm/pgtable.h |  1 +
 arch/xtensa/kernel/syscall.c      | 26 ++++++++++++++++++++++++--
 2 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index 1647a7cc3fbf..31b7da0805ec 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -425,5 +425,6 @@ void update_mmu_tlb_range(struct vm_area_struct *vma,
  * SHM area cache aliasing for userland.
  */
 #define HAVE_ARCH_UNMAPPED_AREA
+#define HAVE_ARCH_MMAP_HINT
 
 #endif /* _XTENSA_PGTABLE_H */
diff --git a/arch/xtensa/kernel/syscall.c b/arch/xtensa/kernel/syscall.c
index dc54f854c2f5..353cce1ac9f1 100644
--- a/arch/xtensa/kernel/syscall.c
+++ b/arch/xtensa/kernel/syscall.c
@@ -54,6 +54,24 @@ asmlinkage long xtensa_fadvise64_64(int fd, int advice,
 }
 
 #ifdef CONFIG_MMU
+unsigned long arch_mmap_hint(struct file *filp, unsigned long addr,
+			     unsigned long len, unsigned long pgoff,
+			     unsigned long flags)
+{
+	if (!addr)
+		return 0;
+
+	if (len > TASK_SIZE)
+		return 0;
+
+	if (flags & MAP_SHARED)
+		addr = COLOUR_ALIGN(addr, pgoff);
+	else
+		addr = PAGE_ALIGN(addr);
+
+	return generic_mmap_hint(filp, addr, len, pgoff, flags);
+}
+
 unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags,
 		vm_flags_t vm_flags)
@@ -73,8 +91,12 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 
 	if (len > TASK_SIZE)
 		return -ENOMEM;
-	if (!addr)
-		addr = TASK_UNMAPPED_BASE;
+
+	addr = arch_mmap_hint(filp, addr, len, pgoff, flags);
+	if (addr)
+		return addr;
+
+	addr = TASK_UNMAPPED_BASE;
 
 	if (flags & MAP_SHARED)
 		addr = COLOUR_ALIGN(addr, pgoff);
-- 
2.47.0.338.g60cca15819-goog




* [PATCH mm-unstable 15/17] mm: powerpc: Introduce arch_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
                   ` (13 preceding siblings ...)
  2024-12-10  2:41 ` [PATCH mm-unstable 14/17] mm: xtensa: " Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 16/17] mm: Fallback to generic_mmap_hint() Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 17/17] mm: Respect mmap hint before THP alignment if allocation is possible Kalesh Singh
  16 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Introduce powerpc arch_mmap_hint() and define HAVE_ARCH_MMAP_HINT.
This is a preparatory patch; no functional change is introduced.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/powerpc/include/asm/book3s/64/slice.h |  1 +
 arch/powerpc/mm/book3s64/slice.c           | 31 ++++++++++++++++++++++
 2 files changed, 32 insertions(+)

diff --git a/arch/powerpc/include/asm/book3s/64/slice.h b/arch/powerpc/include/asm/book3s/64/slice.h
index 5fbe18544cbd..89f629080e90 100644
--- a/arch/powerpc/include/asm/book3s/64/slice.h
+++ b/arch/powerpc/include/asm/book3s/64/slice.h
@@ -10,6 +10,7 @@
 #endif
 #define HAVE_ARCH_UNMAPPED_AREA
 #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+#define HAVE_ARCH_MMAP_HINT
 #endif
 
 #define SLICE_LOW_SHIFT		28
diff --git a/arch/powerpc/mm/book3s64/slice.c b/arch/powerpc/mm/book3s64/slice.c
index bc9a39821d1c..70b95968301a 100644
--- a/arch/powerpc/mm/book3s64/slice.c
+++ b/arch/powerpc/mm/book3s64/slice.c
@@ -647,6 +647,37 @@ static int file_to_psize(struct file *file)
 }
 #endif
 
+static unsigned long slice_mmap_hint(unsigned long addr, unsigned long len,
+				     unsigned long flags, unsigned int psize)
+{
+	unsigned long hint_addr = slice_get_unmapped_area(addr, len, flags, psize, 0);
+
+	if (IS_ERR_VALUE(hint_addr) || hint_addr != PAGE_ALIGN(addr))
+		return 0;
+
+	return hint_addr;
+}
+
+unsigned long arch_mmap_hint(struct file *filp, unsigned long addr,
+			     unsigned long len, unsigned long pgoff,
+			     unsigned long flags)
+{
+	unsigned int psize;
+
+	if (!addr)
+		return 0;
+
+	if (radix_enabled())
+		return generic_mmap_hint(filp, addr, len, pgoff, flags);
+
+	if (filp && is_file_hugepages(filp))
+		psize = file_to_psize(filp);
+	else
+		psize = mm_ctx_user_psize(&current->mm->context);
+
+	return slice_mmap_hint(addr, len, flags, psize);
+}
+
 unsigned long arch_get_unmapped_area(struct file *filp,
 				     unsigned long addr,
 				     unsigned long len,
-- 
2.47.0.338.g60cca15819-goog




* [PATCH mm-unstable 16/17] mm: Fallback to generic_mmap_hint()
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
                   ` (14 preceding siblings ...)
  2024-12-10  2:41 ` [PATCH mm-unstable 15/17] mm: powerpc: " Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  2:41 ` [PATCH mm-unstable 17/17] mm: Respect mmap hint before THP alignment if allocation is possible Kalesh Singh
  16 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

If an architecture doesn't provide arch_mmap_hint(), fall back to
generic_mmap_hint().

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 mm/mmap.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/mmap.c b/mm/mmap.c
index e97eb8bf4889..59bf7d127aa1 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -691,6 +691,15 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 }
 #endif
 
+#ifndef HAVE_ARCH_MMAP_HINT
+unsigned long arch_mmap_hint(struct file *filp, unsigned long addr,
+			     unsigned long len, unsigned long pgoff,
+			     unsigned long flags)
+{
+	return generic_mmap_hint(filp, addr, len, pgoff, flags);
+}
+#endif
+
 /*
  * This mmap-allocator allocates new areas top-down from below the
  * stack's low limit (the base):
-- 
2.47.0.338.g60cca15819-goog




* [PATCH mm-unstable 17/17] mm: Respect mmap hint before THP alignment if allocation is possible
  2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
                   ` (15 preceding siblings ...)
  2024-12-10  2:41 ` [PATCH mm-unstable 16/17] mm: Fallback to generic_mmap_hint() Kalesh Singh
@ 2024-12-10  2:41 ` Kalesh Singh
  2024-12-10  3:37   ` Yang Shi
  16 siblings, 1 reply; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10  2:41 UTC (permalink / raw)
  To: akpm, vbabka, yang, riel, david
  Cc: linux, tsbogend, James.Bottomley, ysato, dalias, glaubitz, davem,
	andreas, tglx, bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas,
	jason.andryuk, leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm, Kalesh Singh

Commit 249608ee4713 ("mm: respect mmap hint address when aligning for THP")
falls back to PAGE_SIZE alignment instead of THP alignment
for anonymous mappings as long as a hint address is provided by the user,
even if we weren't able to allocate the unmapped area at the hint
address in the end.

This was done to address the immediate regression in anonymous mappings
where the hint address was being ignored in some cases due to commit
efa7df3e3bb5 ("mm: align larger anonymous mappings on THP boundaries").

It was later pointed out that this issue also existed for file-backed
mappings from file systems that use thp_get_unmapped_area() for their
.get_unmapped_area() file operation.

The same fix was not applied for file-backed mappings since it would
mean any mmap requests that provide a hint address would be only
PAGE_SIZE-aligned regardless of whether allocation was successful at
the hint address or not.

Instead, use arch_mmap_hint() to first attempt allocation at the hint
address and fall back to THP alignment if that fails.
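
The resulting ordering can be sketched as a toy model; this is not the
kernel implementation, gap_at() stands in for the real VMA lookup, and
the single reserved [start, end) region models existing mappings. The
point is the control flow: a satisfiable hint is returned unmodified
(no THP alignment), and only a failed hint falls through to the
PMD-aligned search.

```c
#include <assert.h>
#include <stdbool.h>

#define PMD_SIZE (1UL << 21)

/* Is there a big-enough hole at addr, given one reserved region? */
static bool gap_at(unsigned long addr, unsigned long len,
		   unsigned long resv_start, unsigned long resv_end)
{
	return addr + len <= resv_start || addr >= resv_end;
}

static unsigned long thp_area(unsigned long hint, unsigned long len,
			      unsigned long resv_start, unsigned long resv_end)
{
	if (hint && gap_at(hint, len, resv_start, resv_end))
		return hint;            /* hint satisfied: no THP padding */

	/* Fall back: toy search picks the first PMD-aligned address the
	 * predicate accepts. */
	for (unsigned long a = PMD_SIZE; a < 0x7fe00000UL; a += PMD_SIZE)
		if (gap_at(a, len, resv_start, resv_end))
			return a;
	return 0;
}
```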

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 mm/huge_memory.c | 15 ++++++++-------
 mm/mmap.c        |  1 -
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 137abeda8602..f070c89dafc9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1097,6 +1097,14 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
 	loff_t off_align = round_up(off, size);
 	unsigned long len_pad, ret, off_sub;
 
+	/*
+	 * If allocation at the address hint succeeds; respect the hint and
+	 * don't try to align to THP boundary.
+	 */
+	addr = arch_mmap_hint(filp, addr, len, off, flags);
+	if (addr)
+		return addr;
+
 	if (!IS_ENABLED(CONFIG_64BIT) || in_compat_syscall())
 		return 0;
 
@@ -1117,13 +1125,6 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
 	if (IS_ERR_VALUE(ret))
 		return 0;
 
-	/*
-	 * Do not try to align to THP boundary if allocation at the address
-	 * hint succeeds.
-	 */
-	if (ret == addr)
-		return addr;
-
 	off_sub = (off - ret) & (size - 1);
 
 	if (test_bit(MMF_TOPDOWN, &current->mm->flags) && !off_sub)
diff --git a/mm/mmap.c b/mm/mmap.c
index 59bf7d127aa1..6bfeec80152a 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -807,7 +807,6 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 	if (get_area) {
 		addr = get_area(file, addr, len, pgoff, flags);
 	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && !file
-		   && !addr /* no hint */
 		   && IS_ALIGNED(len, PMD_SIZE)) {
 		/* Ensures that larger anonymous mappings are THP aligned. */
 		addr = thp_get_unmapped_area_vmflags(file, addr, len,
-- 
2.47.0.338.g60cca15819-goog



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH mm-unstable 01/17] mm: Introduce generic_mmap_hint()
  2024-12-10  2:41 ` [PATCH mm-unstable 01/17] mm: Introduce generic_mmap_hint() Kalesh Singh
@ 2024-12-10  3:27   ` Yang Shi
  2024-12-10 17:55     ` Kalesh Singh
  0 siblings, 1 reply; 23+ messages in thread
From: Yang Shi @ 2024-12-10  3:27 UTC (permalink / raw)
  To: Kalesh Singh
  Cc: akpm, vbabka, yang, riel, david, linux, tsbogend,
	James.Bottomley, ysato, dalias, glaubitz, davem, andreas, tglx,
	bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas, jason.andryuk,
	leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm

On Mon, Dec 9, 2024 at 6:41 PM Kalesh Singh <kaleshsingh@google.com> wrote:
>
> Consolidate the hint searches from both directions (topdown and
> bottomup) into generic_mmap_hint().
>
> No functional change is introduced.
>
> Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> ---
>  include/linux/sched/mm.h |  4 ++++
>  mm/mmap.c                | 45 ++++++++++++++++++++++++----------------
>  2 files changed, 31 insertions(+), 18 deletions(-)
>
> diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
> index 928a626725e6..edeec19d1708 100644
> --- a/include/linux/sched/mm.h
> +++ b/include/linux/sched/mm.h
> @@ -201,6 +201,10 @@ unsigned long mm_get_unmapped_area_vmflags(struct mm_struct *mm,
>                                            unsigned long flags,
>                                            vm_flags_t vm_flags);
>
> +unsigned long generic_mmap_hint(struct file *filp, unsigned long addr,
> +                               unsigned long len, unsigned long pgoff,
> +                               unsigned long flags);
> +
>  unsigned long
>  generic_get_unmapped_area(struct file *filp, unsigned long addr,
>                           unsigned long len, unsigned long pgoff,
> diff --git a/mm/mmap.c b/mm/mmap.c
> index df9154b15ef9..e97eb8bf4889 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -620,6 +620,27 @@ unsigned long vm_unmapped_area(struct vm_unmapped_area_info *info)
>         return addr;
>  }
>
> +unsigned long generic_mmap_hint(struct file *filp, unsigned long addr,
> +                               unsigned long len, unsigned long pgoff,
> +                               unsigned long flags)
> +{
> +       struct mm_struct *mm = current->mm;
> +       struct vm_area_struct *vma, *prev;
> +       const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
> +
> +       if (!addr)
> +               return 0;
> +
> +       addr = PAGE_ALIGN(addr);
> +       vma = find_vma_prev(mm, addr, &prev);
> +       if (mmap_end - len >= addr && addr >= mmap_min_addr &&
> +           (!vma || addr + len <= vm_start_gap(vma)) &&
> +           (!prev || addr >= vm_end_gap(prev)))
> +               return addr;
> +
> +       return 0;
> +}
> +
>  /* Get an address range which is currently unmapped.
>   * For shmat() with addr=0.
>   *
> @@ -637,7 +658,6 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
>                           unsigned long flags, vm_flags_t vm_flags)
>  {
>         struct mm_struct *mm = current->mm;
> -       struct vm_area_struct *vma, *prev;
>         struct vm_unmapped_area_info info = {};
>         const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
>
> @@ -647,14 +667,9 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
>         if (flags & MAP_FIXED)
>                 return addr;

It seems you also can move the MAP_FIXED case into generic_mmap_hint(), right?

>
> -       if (addr) {
> -               addr = PAGE_ALIGN(addr);
> -               vma = find_vma_prev(mm, addr, &prev);
> -               if (mmap_end - len >= addr && addr >= mmap_min_addr &&
> -                   (!vma || addr + len <= vm_start_gap(vma)) &&
> -                   (!prev || addr >= vm_end_gap(prev)))
> -                       return addr;
> -       }
> +       addr = generic_mmap_hint(filp, addr, len, pgoff, flags);
> +       if (addr)
> +               return addr;
>
>         info.length = len;
>         info.low_limit = mm->mmap_base;
> @@ -685,7 +700,6 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
>                                   unsigned long len, unsigned long pgoff,
>                                   unsigned long flags, vm_flags_t vm_flags)
>  {
> -       struct vm_area_struct *vma, *prev;
>         struct mm_struct *mm = current->mm;
>         struct vm_unmapped_area_info info = {};
>         const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
> @@ -698,14 +712,9 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
>                 return addr;
>
>         /* requesting a specific address */
> -       if (addr) {
> -               addr = PAGE_ALIGN(addr);
> -               vma = find_vma_prev(mm, addr, &prev);
> -               if (mmap_end - len >= addr && addr >= mmap_min_addr &&
> -                               (!vma || addr + len <= vm_start_gap(vma)) &&
> -                               (!prev || addr >= vm_end_gap(prev)))
> -                       return addr;
> -       }
> +       addr = generic_mmap_hint(filp, addr, len, pgoff, flags);
> +       if (addr)
> +               return addr;
>
>         info.flags = VM_UNMAPPED_AREA_TOPDOWN;
>         info.length = len;
> --
> 2.47.0.338.g60cca15819-goog
>
>



* Re: [PATCH mm-unstable 17/17] mm: Respect mmap hint before THP alignment if allocation is possible
  2024-12-10  2:41 ` [PATCH mm-unstable 17/17] mm: Respect mmap hint before THP alignment if allocation is possible Kalesh Singh
@ 2024-12-10  3:37   ` Yang Shi
  2024-12-10 17:34     ` Kalesh Singh
  0 siblings, 1 reply; 23+ messages in thread
From: Yang Shi @ 2024-12-10  3:37 UTC (permalink / raw)
  To: Kalesh Singh
  Cc: akpm, vbabka, yang, riel, david, linux, tsbogend,
	James.Bottomley, ysato, dalias, glaubitz, davem, andreas, tglx,
	bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas, jason.andryuk,
	leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm

On Mon, Dec 9, 2024 at 6:45 PM Kalesh Singh <kaleshsingh@google.com> wrote:
>
> Commit 249608ee4713 ("mm: respect mmap hint address when aligning for THP")
> falls back to PAGE_SIZE alignment instead of THP alignment
> for anonymous mapping as long as a hint address is provided by the user
> -- even if we weren't able to allocate the unmapped area at the hint
> address in the end.
>
> This was done to address the immediate regression in anonymous mappings
> where the hint address was being ignored in some cases due to commit
> efa7df3e3bb5 ("mm: align larger anonymous mappings on THP boundaries").
>
> It was later pointed out that this issue also existed for file-backed
> mappings from file systems that use thp_get_unmapped_area() for their
> .get_unmapped_area() file operation.
>
> The same fix was not applied for file-backed mappings since it would
> mean any mmap requests that provide a hint address would be only
> PAGE_SIZE-aligned regardless of whether allocation was successful at
> the hint address or not.
>
> Instead, use arch_mmap_hint() to first attempt allocation at the hint
> address and fall back to THP alignment if that fails.

Thanks for taking time to try to fix this.

>
> Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> ---
>  mm/huge_memory.c | 15 ++++++++-------
>  mm/mmap.c        |  1 -
>  2 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 137abeda8602..f070c89dafc9 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1097,6 +1097,14 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
>         loff_t off_align = round_up(off, size);
>         unsigned long len_pad, ret, off_sub;
>
> +       /*
> +        * If allocation at the address hint succeeds, respect the hint and
> +        * don't try to align to THP boundary.
> +        */
> +       addr = arch_mmap_hint(filp, addr, len, off, flags);
> +       if (addr)
> +               return addr;
> +

IIUC, arch_mmap_hint() will be called in arch_get_unmapped_area() and
arch_get_unmapped_area_topdown() again. So we will actually look up
maple tree twice. It sounds like the second hint address search is
pointless. You should be able to set addr to 0 before calling
mm_get_unmapped_area_vmflags() in order to skip the second hint
address search.

>         if (!IS_ENABLED(CONFIG_64BIT) || in_compat_syscall())
>                 return 0;
>
> @@ -1117,13 +1125,6 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
>         if (IS_ERR_VALUE(ret))
>                 return 0;
>
> -       /*
> -        * Do not try to align to THP boundary if allocation at the address
> -        * hint succeeds.
> -        */
> -       if (ret == addr)
> -               return addr;
> -
>         off_sub = (off - ret) & (size - 1);
>
>         if (test_bit(MMF_TOPDOWN, &current->mm->flags) && !off_sub)
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 59bf7d127aa1..6bfeec80152a 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -807,7 +807,6 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
>         if (get_area) {
>                 addr = get_area(file, addr, len, pgoff, flags);
>         } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && !file
> -                  && !addr /* no hint */
>                    && IS_ALIGNED(len, PMD_SIZE)) {
>                 /* Ensures that larger anonymous mappings are THP aligned. */
>                 addr = thp_get_unmapped_area_vmflags(file, addr, len,
> --
> 2.47.0.338.g60cca15819-goog
>
>



* Re: [PATCH mm-unstable 17/17] mm: Respect mmap hint before THP alignment if allocation is possible
  2024-12-10  3:37   ` Yang Shi
@ 2024-12-10 17:34     ` Kalesh Singh
  2024-12-10 19:39       ` Yang Shi
  0 siblings, 1 reply; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10 17:34 UTC (permalink / raw)
  To: Yang Shi
  Cc: akpm, vbabka, yang, riel, david, linux, tsbogend,
	James.Bottomley, ysato, dalias, glaubitz, davem, andreas, tglx,
	bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas, jason.andryuk,
	leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm

On Mon, Dec 9, 2024 at 7:37 PM Yang Shi <shy828301@gmail.com> wrote:
>
> On Mon, Dec 9, 2024 at 6:45 PM Kalesh Singh <kaleshsingh@google.com> wrote:
> >
> > Commit 249608ee4713 ("mm: respect mmap hint address when aligning for THP")
> > falls back to PAGE_SIZE alignment instead of THP alignment
> > for anonymous mapping as long as a hint address is provided by the user
> > -- even if we weren't able to allocate the unmapped area at the hint
> > address in the end.
> >
> > This was done to address the immediate regression in anonymous mappings
> > where the hint address was being ignored in some cases due to commit
> > efa7df3e3bb5 ("mm: align larger anonymous mappings on THP boundaries").
> >
> > It was later pointed out that this issue also existed for file-backed
> > mappings from file systems that use thp_get_unmapped_area() for their
> > .get_unmapped_area() file operation.
> >
> > The same fix was not applied for file-backed mappings since it would
> > mean any mmap requests that provide a hint address would be only
> > PAGE_SIZE-aligned regardless of whether allocation was successful at
> > the hint address or not.
> >
> > Instead, use arch_mmap_hint() to first attempt allocation at the hint
> > address and fall back to THP alignment if that fails.
>
> Thanks for taking time to try to fix this.
>
> >
> > Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> > ---
> >  mm/huge_memory.c | 15 ++++++++-------
> >  mm/mmap.c        |  1 -
> >  2 files changed, 8 insertions(+), 8 deletions(-)
> >
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 137abeda8602..f070c89dafc9 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -1097,6 +1097,14 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
> >         loff_t off_align = round_up(off, size);
> >         unsigned long len_pad, ret, off_sub;
> >
> > +       /*
> > +        * If allocation at the address hint succeeds, respect the hint and
> > +        * don't try to align to THP boundary.
> > +        */
> > +       addr = arch_mmap_hint(filp, addr, len, off, flags);
> > +       if (addr)
> > +               return addr;
> > +

Hi Yang,

Thanks for the comments.

>
> IIUC, arch_mmap_hint() will be called in arch_get_unmapped_area() and
> arch_get_unmapped_area_topdown() again. So we will actually look up
> maple tree twice. It sounds like the second hint address search is
> pointless. You should be able to set addr to 0 before calling
> mm_get_unmapped_area_vmflags() in order to skip the second hint
> address search.

You are right that it would call into arch_mmap_hint() twice but it
only attempts the lookup once since on the second attempt addr == 0.

Thanks,
Kalesh
>
> >         if (!IS_ENABLED(CONFIG_64BIT) || in_compat_syscall())
> >                 return 0;
> >
> > @@ -1117,13 +1125,6 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
> >         if (IS_ERR_VALUE(ret))
> >                 return 0;
> >
> > -       /*
> > -        * Do not try to align to THP boundary if allocation at the address
> > -        * hint succeeds.
> > -        */
> > -       if (ret == addr)
> > -               return addr;
> > -
> >         off_sub = (off - ret) & (size - 1);
> >
> >         if (test_bit(MMF_TOPDOWN, &current->mm->flags) && !off_sub)
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 59bf7d127aa1..6bfeec80152a 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -807,7 +807,6 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
> >         if (get_area) {
> >                 addr = get_area(file, addr, len, pgoff, flags);
> >         } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && !file
> > -                  && !addr /* no hint */
> >                    && IS_ALIGNED(len, PMD_SIZE)) {
> >                 /* Ensures that larger anonymous mappings are THP aligned. */
> >                 addr = thp_get_unmapped_area_vmflags(file, addr, len,
> > --
> > 2.47.0.338.g60cca15819-goog
> >
> >



* Re: [PATCH mm-unstable 01/17] mm: Introduce generic_mmap_hint()
  2024-12-10  3:27   ` Yang Shi
@ 2024-12-10 17:55     ` Kalesh Singh
  0 siblings, 0 replies; 23+ messages in thread
From: Kalesh Singh @ 2024-12-10 17:55 UTC (permalink / raw)
  To: Yang Shi
  Cc: akpm, vbabka, yang, riel, david, linux, tsbogend,
	James.Bottomley, ysato, dalias, glaubitz, davem, andreas, tglx,
	bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas, jason.andryuk,
	leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm

On Mon, Dec 9, 2024 at 7:27 PM Yang Shi <shy828301@gmail.com> wrote:
>
> On Mon, Dec 9, 2024 at 6:41 PM Kalesh Singh <kaleshsingh@google.com> wrote:
> >
> > Consolidate the hint searches from both directions (topdown and
> > bottomup) into generic_mmap_hint().
> >
> > No functional change is introduced.
> >
> > Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> > ---
> >  include/linux/sched/mm.h |  4 ++++
> >  mm/mmap.c                | 45 ++++++++++++++++++++++++----------------
> >  2 files changed, 31 insertions(+), 18 deletions(-)
> >
> > diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
> > index 928a626725e6..edeec19d1708 100644
> > --- a/include/linux/sched/mm.h
> > +++ b/include/linux/sched/mm.h
> > @@ -201,6 +201,10 @@ unsigned long mm_get_unmapped_area_vmflags(struct mm_struct *mm,
> >                                            unsigned long flags,
> >                                            vm_flags_t vm_flags);
> >
> > +unsigned long generic_mmap_hint(struct file *filp, unsigned long addr,
> > +                               unsigned long len, unsigned long pgoff,
> > +                               unsigned long flags);
> > +
> >  unsigned long
> >  generic_get_unmapped_area(struct file *filp, unsigned long addr,
> >                           unsigned long len, unsigned long pgoff,
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index df9154b15ef9..e97eb8bf4889 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -620,6 +620,27 @@ unsigned long vm_unmapped_area(struct vm_unmapped_area_info *info)
> >         return addr;
> >  }
> >
> > +unsigned long generic_mmap_hint(struct file *filp, unsigned long addr,
> > +                               unsigned long len, unsigned long pgoff,
> > +                               unsigned long flags)
> > +{
> > +       struct mm_struct *mm = current->mm;
> > +       struct vm_area_struct *vma, *prev;
> > +       const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
> > +
> > +       if (!addr)
> > +               return 0;
> > +
> > +       addr = PAGE_ALIGN(addr);
> > +       vma = find_vma_prev(mm, addr, &prev);
> > +       if (mmap_end - len >= addr && addr >= mmap_min_addr &&
> > +           (!vma || addr + len <= vm_start_gap(vma)) &&
> > +           (!prev || addr >= vm_end_gap(prev)))
> > +               return addr;
> > +
> > +       return 0;
> > +}
> > +
> >  /* Get an address range which is currently unmapped.
> >   * For shmat() with addr=0.
> >   *
> > @@ -637,7 +658,6 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
> >                           unsigned long flags, vm_flags_t vm_flags)
> >  {
> >         struct mm_struct *mm = current->mm;
> > -       struct vm_area_struct *vma, *prev;
> >         struct vm_unmapped_area_info info = {};
> >         const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
> >
> > @@ -647,14 +667,9 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
> >         if (flags & MAP_FIXED)
> >                 return addr;
>
> It seems you also can move the MAP_FIXED case into generic_mmap_hint(), right?

I think that could be done too. We'll need a new name :) Let me take a
look at it ...

-- Kalesh

>
> >
> > -       if (addr) {
> > -               addr = PAGE_ALIGN(addr);
> > -               vma = find_vma_prev(mm, addr, &prev);
> > -               if (mmap_end - len >= addr && addr >= mmap_min_addr &&
> > -                   (!vma || addr + len <= vm_start_gap(vma)) &&
> > -                   (!prev || addr >= vm_end_gap(prev)))
> > -                       return addr;
> > -       }
> > +       addr = generic_mmap_hint(filp, addr, len, pgoff, flags);
> > +       if (addr)
> > +               return addr;
> >
> >         info.length = len;
> >         info.low_limit = mm->mmap_base;
> > @@ -685,7 +700,6 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
> >                                   unsigned long len, unsigned long pgoff,
> >                                   unsigned long flags, vm_flags_t vm_flags)
> >  {
> > -       struct vm_area_struct *vma, *prev;
> >         struct mm_struct *mm = current->mm;
> >         struct vm_unmapped_area_info info = {};
> >         const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
> > @@ -698,14 +712,9 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
> >                 return addr;
> >
> >         /* requesting a specific address */
> > -       if (addr) {
> > -               addr = PAGE_ALIGN(addr);
> > -               vma = find_vma_prev(mm, addr, &prev);
> > -               if (mmap_end - len >= addr && addr >= mmap_min_addr &&
> > -                               (!vma || addr + len <= vm_start_gap(vma)) &&
> > -                               (!prev || addr >= vm_end_gap(prev)))
> > -                       return addr;
> > -       }
> > +       addr = generic_mmap_hint(filp, addr, len, pgoff, flags);
> > +       if (addr)
> > +               return addr;
> >
> >         info.flags = VM_UNMAPPED_AREA_TOPDOWN;
> >         info.length = len;
> > --
> > 2.47.0.338.g60cca15819-goog
> >
> >



* Re: [PATCH mm-unstable 17/17] mm: Respect mmap hint before THP alignment if allocation is possible
  2024-12-10 17:34     ` Kalesh Singh
@ 2024-12-10 19:39       ` Yang Shi
  0 siblings, 0 replies; 23+ messages in thread
From: Yang Shi @ 2024-12-10 19:39 UTC (permalink / raw)
  To: Kalesh Singh
  Cc: akpm, vbabka, yang, riel, david, linux, tsbogend,
	James.Bottomley, ysato, dalias, glaubitz, davem, andreas, tglx,
	bp, dave.hansen, x86, chris, jcmvbkbc, bhelgaas, jason.andryuk,
	leitao, linux-alpha, linux-kernel, linux-snps-arc,
	linux-arm-kernel, linux-csky, loongarch, linux-mips,
	linux-parisc, linuxppc-dev, linux-s390, linux-sh, sparclinux,
	linux-mm, kernel-team, android-mm

On Tue, Dec 10, 2024 at 9:34 AM Kalesh Singh <kaleshsingh@google.com> wrote:
>
> On Mon, Dec 9, 2024 at 7:37 PM Yang Shi <shy828301@gmail.com> wrote:
> >
> > On Mon, Dec 9, 2024 at 6:45 PM Kalesh Singh <kaleshsingh@google.com> wrote:
> > >
> > > Commit 249608ee4713 ("mm: respect mmap hint address when aligning for THP")
> > > falls back to PAGE_SIZE alignment instead of THP alignment
> > > for anonymous mapping as long as a hint address is provided by the user
> > > -- even if we weren't able to allocate the unmapped area at the hint
> > > address in the end.
> > >
> > > This was done to address the immediate regression in anonymous mappings
> > > where the hint address was being ignored in some cases due to commit
> > > efa7df3e3bb5 ("mm: align larger anonymous mappings on THP boundaries").
> > >
> > > It was later pointed out that this issue also existed for file-backed
> > > mappings from file systems that use thp_get_unmapped_area() for their
> > > .get_unmapped_area() file operation.
> > >
> > > The same fix was not applied for file-backed mappings since it would
> > > mean any mmap requests that provide a hint address would be only
> > > PAGE_SIZE-aligned regardless of whether allocation was successful at
> > > the hint address or not.
> > >
> > > Instead, use arch_mmap_hint() to first attempt allocation at the hint
> > > address and fall back to THP alignment if that fails.
> >
> > Thanks for taking time to try to fix this.
> >
> > >
> > > Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> > > ---
> > >  mm/huge_memory.c | 15 ++++++++-------
> > >  mm/mmap.c        |  1 -
> > >  2 files changed, 8 insertions(+), 8 deletions(-)
> > >
> > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > index 137abeda8602..f070c89dafc9 100644
> > > --- a/mm/huge_memory.c
> > > +++ b/mm/huge_memory.c
> > > @@ -1097,6 +1097,14 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
> > >         loff_t off_align = round_up(off, size);
> > >         unsigned long len_pad, ret, off_sub;
> > >
> > > +       /*
> > > +        * If allocation at the address hint succeeds, respect the hint and
> > > +        * don't try to align to THP boundary.
> > > +        */
> > > +       addr = arch_mmap_hint(filp, addr, len, off, flags);
> > > +       if (addr)
> > > +               return addr;
> > > +
>
> Hi Yang,
>
> Thanks for the comments.
>
> >
> > IIUC, arch_mmap_hint() will be called in arch_get_unmapped_area() and
> > arch_get_unmapped_area_topdown() again. So we will actually look up
> > maple tree twice. It sounds like the second hint address search is
> > pointless. You should be able to set addr to 0 before calling
> > mm_get_unmapped_area_vmflags() in order to skip the second hint
> > address search.
>
> You are right that it would call into arch_mmap_hint() twice but it
> only attempts the lookup once since on the second attempt addr == 0.

Aha, yeah, I missed that addr is going to be reset if arch_mmap_hint()
fails to find a suitable area.

>
> Thanks,
> Kalesh
> >
> > >         if (!IS_ENABLED(CONFIG_64BIT) || in_compat_syscall())
> > >                 return 0;
> > >
> > > @@ -1117,13 +1125,6 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
> > >         if (IS_ERR_VALUE(ret))
> > >                 return 0;
> > >
> > > -       /*
> > > -        * Do not try to align to THP boundary if allocation at the address
> > > -        * hint succeeds.
> > > -        */
> > > -       if (ret == addr)
> > > -               return addr;
> > > -
> > >         off_sub = (off - ret) & (size - 1);
> > >
> > >         if (test_bit(MMF_TOPDOWN, &current->mm->flags) && !off_sub)
> > > diff --git a/mm/mmap.c b/mm/mmap.c
> > > index 59bf7d127aa1..6bfeec80152a 100644
> > > --- a/mm/mmap.c
> > > +++ b/mm/mmap.c
> > > @@ -807,7 +807,6 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
> > >         if (get_area) {
> > >                 addr = get_area(file, addr, len, pgoff, flags);
> > >         } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && !file
> > > -                  && !addr /* no hint */
> > >                    && IS_ALIGNED(len, PMD_SIZE)) {
> > >                 /* Ensures that larger anonymous mappings are THP aligned. */
> > >                 addr = thp_get_unmapped_area_vmflags(file, addr, len,
> > > --
> > > 2.47.0.338.g60cca15819-goog
> > >
> > >



end of thread, other threads:[~2024-12-10 19:40 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-12-10  2:41 [PATCH mm-unstable 00/17] mm: Introduce arch_mmap_hint() Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 01/17] mm: Introduce generic_mmap_hint() Kalesh Singh
2024-12-10  3:27   ` Yang Shi
2024-12-10 17:55     ` Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 02/17] mm: x86: Introduce arch_mmap_hint() Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 03/17] mm: arm: " Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 04/17] mm: alpha: " Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 05/17] mm: arc: Use generic_mmap_hint() Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 06/17] mm: csky: Introduce arch_mmap_hint() Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 07/17] mm: loongarch: " Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 08/17] mm: mips: Introduce arch_align_mmap_hint() Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 09/17] mm: parisc: " Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 10/17] mm: s390: Introduce arch_mmap_hint() Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 11/17] mm: sh: " Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 12/17] mm: sparc32: " Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 13/17] mm: sparc64: " Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 14/17] mm: xtensa: " Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 15/17] mm: powerpc: " Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 16/17] mm: Fallback to generic_mmap_hint() Kalesh Singh
2024-12-10  2:41 ` [PATCH mm-unstable 17/17] mm: Respect mmap hint before THP alignment if allocation is possible Kalesh Singh
2024-12-10  3:37   ` Yang Shi
2024-12-10 17:34     ` Kalesh Singh
2024-12-10 19:39       ` Yang Shi

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox