linux-mm.kvack.org archive mirror
* Re: [PATCH] /dev/zero: try to align PMD_SIZE for private mapping
@ 2025-07-30  2:00 zhangqilong
  0 siblings, 0 replies; 5+ messages in thread
From: zhangqilong @ 2025-07-30  2:00 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: arnd, gregkh, linux-kernel, linux-mm, Wangkefeng (OS Kernel Lab),
	Sunnanyong

> 
> On Tue, Jul 29, 2025 at 09:49:41PM +0800, Zhang Qilong wrote:
> > By default, THP are usually enabled. Mapping /dev/zero with a size
> 
> Err... we can't rely on this.

OK, I will update this description in the next version.

> 
> As per below comments on code, I'd update this to say something about
> fallback if it's not.
> 
> > larger than 2MB could achieve performance gains by allocating aligned
> > address. The mprot_tw4m in libMicro average execution time on arm64:
> >   - Test case:        mprot_tw4m
> >   - Before the patch:   22 us
> >   - After the patch:    17 us
> >
> > Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
> 
> This looks ok to me because there's a precedent for using
> thp_get_unmapped_area() directly as a file_operations->get_unmapped_area,
> e.g. in ext4.
> 
> We also simply (amusingly, or perhaps not hugely amusingly, rather
> 'uniquely') establish an anonymous mapping on f_op->mmap via
> mmap_zero() using vma_set_anonymous(), so we can rely on the standard
> anon page memory faulting logic to sort out the actual allocation/mapping of
> the huge page via:
> 
> __handle_mm_fault() -> create_huge_pmd() ->
> do_huge_pmd_anonymous_page() etc.
> 
> So everything should 'just work', and fallback if not permitted.
> 
> So in general seems fine.
> 
> > ---
> >  drivers/char/mem.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> >
> > diff --git a/drivers/char/mem.c b/drivers/char/mem.c
> > index 48839958b0b1..c57327ca9dd6 100644
> > --- a/drivers/char/mem.c
> > +++ b/drivers/char/mem.c
> > @@ -515,10 +515,12 @@ static int mmap_zero(struct file *file, struct vm_area_struct *vma)
> >  static unsigned long get_unmapped_area_zero(struct file *file,
> >  				unsigned long addr, unsigned long len,
> >  				unsigned long pgoff, unsigned long flags)
> >  {
> >  #ifdef CONFIG_MMU
> > +	unsigned long ret;
> > +
> >  	if (flags & MAP_SHARED) {
> >  		/*
> >  		 * mmap_zero() will call shmem_zero_setup() to create a file,
> >  		 * so use shmem's get_unmapped_area in case it can be huge;
> >  		 * and pass NULL for file as in mmap.c's get_unmapped_area(),
> > @@ -526,10 +528,13 @@ static unsigned long get_unmapped_area_zero(struct file *file,
> >  		 */
> >  		return shmem_get_unmapped_area(NULL, addr, len, pgoff, flags);
> >  	}
> >
> >  	/* Otherwise flags & MAP_PRIVATE: with no shmem object beneath it */
> 
> Let's add a comment here like:
> 
> 	/*
> 	 * Attempt to map aligned to huge page size if possible, otherwise we
> 	 * fall back to system page size mappings. If THP is not enabled, this
> 	 * returns NULL and we always fallback.
> 	 */
> 
> I think it'd be sensible to have an #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> here, because thp_get_unmapped_area() does the fallback for you, and
> then otherwise we'd be trying it twice which is weird.
> 
> E.g.:
> 
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> 	return thp_get_unmapped_area(file, addr, len, pgoff, flags);
> #else
> 	return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
> #endif
> 

Trying it twice is really unnecessary. This looks clearer and better; I will
follow your suggestion in patch V2. Thanks a lot for your helpful advice.
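
With that applied, get_unmapped_area_zero() would end up looking roughly
like the following (an untested sketch with the existing comments trimmed,
not the actual posted v2):

static unsigned long get_unmapped_area_zero(struct file *file,
				unsigned long addr, unsigned long len,
				unsigned long pgoff, unsigned long flags)
{
#ifdef CONFIG_MMU
	/* mmap_zero() sets up a shmem object for MAP_SHARED, so let shmem pick. */
	if (flags & MAP_SHARED)
		return shmem_get_unmapped_area(NULL, addr, len, pgoff, flags);

	/*
	 * Otherwise MAP_PRIVATE: try a PMD-aligned address when THP is built
	 * in; thp_get_unmapped_area() already falls back internally.
	 */
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	return thp_get_unmapped_area(file, addr, len, pgoff, flags);
#else
	return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
#endif
#else
	return -ENOSYS;
#endif
}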

> > +	ret = thp_get_unmapped_area(file, addr, len, pgoff, flags);
> > +	if (ret)
> > +		return ret;
> >  	return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
> >  #else
> >  	return -ENOSYS;
> >  #endif
> >  }
> > --
> > 2.43.0
> >
> 
> In _theory_ we should do the thing in mmap() where we check the size is
> PMD-aligned (see __get_unmapped_area()), but I don't think anybody's
> mapping a bunch of /dev/zero mappings next to each other or using them in
> any way where that'd matter... So yeah let's not :)

I agree with your thought; I will not add that check here.
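
(For reference, the gate being skipped would amount to something roughly
like the hypothetical fragment below, mirroring the PMD-size check mmap()
applies for anonymous mappings; it is not going into v2:)

	/* Hypothetical: only try PMD alignment when the length can use it. */
	if (IS_ALIGNED(len, PMD_SIZE))
		return thp_get_unmapped_area(file, addr, len, pgoff, flags);
	return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);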

Thanks.
Zhang




* [PATCH] /dev/zero: try to align PMD_SIZE for private mapping
@ 2025-07-29 13:49 Zhang Qilong
  2025-07-29 13:58 ` David Hildenbrand
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Zhang Qilong @ 2025-07-29 13:49 UTC (permalink / raw)
  To: arnd, gregkh
  Cc: linux-kernel, linux-mm, wangkefeng.wang, zhangqilong3, sunnanyong

By default, THP are usually enabled. Mapping /dev/zero with a size
larger than 2MB could achieve performance gains by allocating aligned
address. The mprot_tw4m in libMicro average execution time on arm64:
  - Test case:        mprot_tw4m
  - Before the patch:   22 us
  - After the patch:    17 us

Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
---
 drivers/char/mem.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/char/mem.c b/drivers/char/mem.c
index 48839958b0b1..c57327ca9dd6 100644
--- a/drivers/char/mem.c
+++ b/drivers/char/mem.c
@@ -515,10 +515,12 @@ static int mmap_zero(struct file *file, struct vm_area_struct *vma)
 static unsigned long get_unmapped_area_zero(struct file *file,
 				unsigned long addr, unsigned long len,
 				unsigned long pgoff, unsigned long flags)
 {
 #ifdef CONFIG_MMU
+	unsigned long ret;
+
 	if (flags & MAP_SHARED) {
 		/*
 		 * mmap_zero() will call shmem_zero_setup() to create a file,
 		 * so use shmem's get_unmapped_area in case it can be huge;
 		 * and pass NULL for file as in mmap.c's get_unmapped_area(),
@@ -526,10 +528,13 @@ static unsigned long get_unmapped_area_zero(struct file *file,
 		 */
 		return shmem_get_unmapped_area(NULL, addr, len, pgoff, flags);
 	}
 
 	/* Otherwise flags & MAP_PRIVATE: with no shmem object beneath it */
+	ret = thp_get_unmapped_area(file, addr, len, pgoff, flags);
+	if (ret)
+		return ret;
 	return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
 #else
 	return -ENOSYS;
 #endif
 }
-- 
2.43.0
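
(For context, the kind of userspace mapping this patch targets looks
roughly like the hypothetical snippet below; it is not part of the patch
or of the libMicro mprot_tw4m benchmark.)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 4UL << 20;	/* > 2 MiB, so a PMD-aligned start can pay off */
	int fd = open("/dev/zero", O_RDWR);
	char *p;

	if (fd < 0)
		return 1;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
	close(fd);
	if (p == MAP_FAILED)
		return 1;

	/* Fault the pages in; with THP enabled they may be backed by huge pages. */
	memset(p, 0x5a, len);
	printf("mapped at %p, 2MiB-aligned: %s\n", p,
	       ((unsigned long)p & ((2UL << 20) - 1)) ? "no" : "yes");

	munmap(p, len);
	return 0;
}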




Thread overview: 5+ messages
2025-07-30  2:00 [PATCH] /dev/zero: try to align PMD_SIZE for private mapping zhangqilong
2025-07-29 13:49 Zhang Qilong
2025-07-29 13:58 ` David Hildenbrand
2025-07-29 15:48 ` Lorenzo Stoakes
2025-07-30  3:36 ` kernel test robot
