linux-mm.kvack.org archive mirror
* [PATCH v3 1/2] mm: use aligned address in clear_gigantic_page()
@ 2024-10-28 14:56 Kefeng Wang
  2024-10-28 14:56 ` [PATCH v3 2/2] mm: use aligned address in copy_user_gigantic_page() Kefeng Wang
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Kefeng Wang @ 2024-10-28 14:56 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Matthew Wilcox, Muchun Song, Huang, Ying,
	linux-mm, Kefeng Wang

In the current kernel, hugetlb_no_page() calls folio_zero_user() with the
fault address, which may not be aligned to the huge page size.
folio_zero_user() may then call clear_gigantic_page() with that address,
while clear_gigantic_page() requires a huge-page-size-aligned address, so
this can cause memory corruption or an information leak. Additionally,
use the more descriptive name 'addr_hint' instead of 'addr' for
clear_gigantic_page().

Fixes: 78fefd04c123 ("mm: memory: convert clear_huge_page() to folio_zero_user()")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
v3:
- revise patch description, suggested by Huang Ying
- use addr_hint for clear_gigantic_page(), suggested by David
v2: 
- update changelog to clarify the impact, per Andrew

 fs/hugetlbfs/inode.c | 2 +-
 mm/memory.c          | 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index a4441fb77f7c..a5ea006f403e 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -825,7 +825,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 			error = PTR_ERR(folio);
 			goto out;
 		}
-		folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
+		folio_zero_user(folio, addr);
 		__folio_mark_uptodate(folio);
 		error = hugetlb_add_to_page_cache(folio, mapping, index);
 		if (unlikely(error)) {
diff --git a/mm/memory.c b/mm/memory.c
index 75c2dfd04f72..84864387f965 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6815,9 +6815,10 @@ static inline int process_huge_page(
 	return 0;
 }
 
-static void clear_gigantic_page(struct folio *folio, unsigned long addr,
+static void clear_gigantic_page(struct folio *folio, unsigned long addr_hint,
 				unsigned int nr_pages)
 {
+	unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(folio));
 	int i;
 
 	might_sleep();
-- 
2.27.0




* [PATCH v3 2/2] mm: use aligned address in copy_user_gigantic_page()
  2024-10-28 14:56 [PATCH v3 1/2] mm: use aligned address in clear_gigantic_page() Kefeng Wang
@ 2024-10-28 14:56 ` Kefeng Wang
  2024-10-29  9:51   ` David Hildenbrand
  2024-10-29  1:22 ` [PATCH v3 1/2] mm: use aligned address in clear_gigantic_page() Huang, Ying
  2024-10-29  8:42 ` David Hildenbrand
  2 siblings, 1 reply; 5+ messages in thread
From: Kefeng Wang @ 2024-10-28 14:56 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Matthew Wilcox, Muchun Song, Huang, Ying,
	linux-mm, Kefeng Wang

In the current kernel, hugetlb_wp() calls copy_user_large_folio() with
the fault address, which may not be aligned to the huge page size.
copy_user_large_folio() may then call copy_user_gigantic_page() with
that address, while copy_user_gigantic_page() requires a
huge-page-size-aligned address, so this can cause memory corruption or
an information leak. Additionally, use the more descriptive name
'addr_hint' instead of 'addr' for copy_user_gigantic_page().

Fixes: 530dd9926dc1 ("mm: memory: improve copy_user_large_folio()")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
v3:
- revise patch description, suggested by Huang Ying
- use addr_hint for copy_user_gigantic_page(), suggested by David
v2: 
- update changelog to clarify the impact, per Andrew

 mm/hugetlb.c | 5 ++---
 mm/memory.c  | 5 +++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2c8c5da0f5d3..15b5d46d49d2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5338,7 +5338,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 					break;
 				}
 				ret = copy_user_large_folio(new_folio, pte_folio,
-						ALIGN_DOWN(addr, sz), dst_vma);
+							    addr, dst_vma);
 				folio_put(pte_folio);
 				if (ret) {
 					folio_put(new_folio);
@@ -6641,8 +6641,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			*foliop = NULL;
 			goto out;
 		}
-		ret = copy_user_large_folio(folio, *foliop,
-					    ALIGN_DOWN(dst_addr, size), dst_vma);
+		ret = copy_user_large_folio(folio, *foliop, dst_addr, dst_vma);
 		folio_put(*foliop);
 		*foliop = NULL;
 		if (ret) {
diff --git a/mm/memory.c b/mm/memory.c
index 84864387f965..209885a4134f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6852,13 +6852,14 @@ void folio_zero_user(struct folio *folio, unsigned long addr_hint)
 }
 
 static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
-				   unsigned long addr,
+				   unsigned long addr_hint,
 				   struct vm_area_struct *vma,
 				   unsigned int nr_pages)
 {
-	int i;
+	unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(dst));
 	struct page *dst_page;
 	struct page *src_page;
+	int i;
 
 	for (i = 0; i < nr_pages; i++) {
 		dst_page = folio_page(dst, i);
-- 
2.27.0




* Re: [PATCH v3 1/2] mm: use aligned address in clear_gigantic_page()
  2024-10-28 14:56 [PATCH v3 1/2] mm: use aligned address in clear_gigantic_page() Kefeng Wang
  2024-10-28 14:56 ` [PATCH v3 2/2] mm: use aligned address in copy_user_gigantic_page() Kefeng Wang
@ 2024-10-29  1:22 ` Huang, Ying
  2024-10-29  8:42 ` David Hildenbrand
  2 siblings, 0 replies; 5+ messages in thread
From: Huang, Ying @ 2024-10-29  1:22 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: Andrew Morton, David Hildenbrand, Matthew Wilcox, Muchun Song, linux-mm

Kefeng Wang <wangkefeng.wang@huawei.com> writes:

> In the current kernel, hugetlb_no_page() calls folio_zero_user() with
> the fault address, which may not be aligned to the huge page size.
> folio_zero_user() may then call clear_gigantic_page() with that
> address, while clear_gigantic_page() requires a huge-page-size-aligned
> address, so this can cause memory corruption or an information leak.
> Additionally, use the more descriptive name 'addr_hint' instead of
> 'addr' for clear_gigantic_page().
>
> Fixes: 78fefd04c123 ("mm: memory: convert clear_huge_page() to folio_zero_user()")
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
> v3:
> - revise patch description, suggested by Huang Ying
> - use addr_hint for clear_gigantic_page(), suggested by David
> v2: 
> - update changelog to clarify the impact, per Andrew
>
>  fs/hugetlbfs/inode.c | 2 +-
>  mm/memory.c          | 3 ++-
>  2 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index a4441fb77f7c..a5ea006f403e 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -825,7 +825,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
>  			error = PTR_ERR(folio);
>  			goto out;
>  		}
> -		folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
> +		folio_zero_user(folio, addr);
>  		__folio_mark_uptodate(folio);
>  		error = hugetlb_add_to_page_cache(folio, mapping, index);
>  		if (unlikely(error)) {
> diff --git a/mm/memory.c b/mm/memory.c
> index 75c2dfd04f72..84864387f965 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -6815,9 +6815,10 @@ static inline int process_huge_page(
>  	return 0;
>  }
>  
> -static void clear_gigantic_page(struct folio *folio, unsigned long addr,
> +static void clear_gigantic_page(struct folio *folio, unsigned long addr_hint,
>  				unsigned int nr_pages)
>  {
> +	unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(folio));
>  	int i;
>  
>  	might_sleep();

LGTM.  Thanks for fixing this.  Feel free to add

Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

in a future version.

--
Best Regards,
Huang, Ying



* Re: [PATCH v3 1/2] mm: use aligned address in clear_gigantic_page()
  2024-10-28 14:56 [PATCH v3 1/2] mm: use aligned address in clear_gigantic_page() Kefeng Wang
  2024-10-28 14:56 ` [PATCH v3 2/2] mm: use aligned address in copy_user_gigantic_page() Kefeng Wang
  2024-10-29  1:22 ` [PATCH v3 1/2] mm: use aligned address in clear_gigantic_page() Huang, Ying
@ 2024-10-29  8:42 ` David Hildenbrand
  2 siblings, 0 replies; 5+ messages in thread
From: David Hildenbrand @ 2024-10-29  8:42 UTC (permalink / raw)
  To: Kefeng Wang, Andrew Morton
  Cc: Matthew Wilcox, Muchun Song, Huang, Ying, linux-mm

On 28.10.24 15:56, Kefeng Wang wrote:
> In the current kernel, hugetlb_no_page() calls folio_zero_user() with
> the fault address, which may not be aligned to the huge page size.
> folio_zero_user() may then call clear_gigantic_page() with that
> address, while clear_gigantic_page() requires a huge-page-size-aligned
> address, so this can cause memory corruption or an information leak.
> Additionally, use the more descriptive name 'addr_hint' instead of
> 'addr' for clear_gigantic_page().
> 
> Fixes: 78fefd04c123 ("mm: memory: convert clear_huge_page() to folio_zero_user()")
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
> v3:
> - revise patch description, suggested by Huang Ying
> - use addr_hint for clear_gigantic_page(), suggested by David
> v2:
> - update changelog to clarify the impact, per Andrew
> 
>   fs/hugetlbfs/inode.c | 2 +-
>   mm/memory.c          | 3 ++-
>   2 files changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index a4441fb77f7c..a5ea006f403e 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -825,7 +825,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
>   			error = PTR_ERR(folio);
>   			goto out;
>   		}
> -		folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
> +		folio_zero_user(folio, addr);
>   		__folio_mark_uptodate(folio);
>   		error = hugetlb_add_to_page_cache(folio, mapping, index);
>   		if (unlikely(error)) {
> diff --git a/mm/memory.c b/mm/memory.c
> index 75c2dfd04f72..84864387f965 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -6815,9 +6815,10 @@ static inline int process_huge_page(
>   	return 0;
>   }
>   
> -static void clear_gigantic_page(struct folio *folio, unsigned long addr,
> +static void clear_gigantic_page(struct folio *folio, unsigned long addr_hint,
>   				unsigned int nr_pages)
>   {
> +	unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(folio));
>   	int i;
>   
>   	might_sleep();


Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb




* Re: [PATCH v3 2/2] mm: use aligned address in copy_user_gigantic_page()
  2024-10-28 14:56 ` [PATCH v3 2/2] mm: use aligned address in copy_user_gigantic_page() Kefeng Wang
@ 2024-10-29  9:51   ` David Hildenbrand
  0 siblings, 0 replies; 5+ messages in thread
From: David Hildenbrand @ 2024-10-29  9:51 UTC (permalink / raw)
  To: Kefeng Wang, Andrew Morton
  Cc: Matthew Wilcox, Muchun Song, Huang, Ying, linux-mm

On 28.10.24 15:56, Kefeng Wang wrote:
> In the current kernel, hugetlb_wp() calls copy_user_large_folio() with
> the fault address, which may not be aligned to the huge page size.
> copy_user_large_folio() may then call copy_user_gigantic_page() with
> that address, while copy_user_gigantic_page() requires a
> huge-page-size-aligned address, so this can cause memory corruption or
> an information leak. Additionally, use the more descriptive name
> 'addr_hint' instead of 'addr' for copy_user_gigantic_page().
> 
> Fixes: 530dd9926dc1 ("mm: memory: improve copy_user_large_folio()")
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb




end of thread, other threads:[~2024-10-29  9:51 UTC | newest]

