linux-mm.kvack.org archive mirror
* [PATCH resend 1/2] mm: always use base address when clear gigantic page
@ 2024-10-25  0:44 Kefeng Wang
  2024-10-25  0:44 ` [PATCH resend 2/2] mm: always use base address when copy " Kefeng Wang
  2024-10-25 22:56 ` [PATCH resend 1/2] mm: always use base address when clear " Andrew Morton
  0 siblings, 2 replies; 4+ messages in thread
From: Kefeng Wang @ 2024-10-25  0:44 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Matthew Wilcox, Muchun Song, Huang, Ying,
	linux-mm, Kefeng Wang

When clearing a gigantic page, it zeroes the page from the first subpage
to the last subpage; that is, an aligned base address is needed there,
and we don't need to align the address down in the caller, as the real
address will be passed to process_huge_page().

Fixes: 78fefd04c123 ("mm: memory: convert clear_huge_page() to folio_zero_user()")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/hugetlbfs/inode.c | 2 +-
 mm/memory.c          | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index a4441fb77f7c..a5ea006f403e 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -825,7 +825,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 			error = PTR_ERR(folio);
 			goto out;
 		}
-		folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
+		folio_zero_user(folio, addr);
 		__folio_mark_uptodate(folio);
 		error = hugetlb_add_to_page_cache(folio, mapping, index);
 		if (unlikely(error)) {
diff --git a/mm/memory.c b/mm/memory.c
index 48e534aa939c..934ab5fff537 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6802,6 +6802,7 @@ static void clear_gigantic_page(struct folio *folio, unsigned long addr,
 	int i;
 
 	might_sleep();
+	addr = ALIGN_DOWN(addr, folio_size(folio));
 	for (i = 0; i < nr_pages; i++) {
 		cond_resched();
 		clear_user_highpage(folio_page(folio, i), addr + i * PAGE_SIZE);
-- 
2.27.0



^ permalink raw reply	[flat|nested] 4+ messages in thread

* [PATCH resend 2/2] mm: always use base address when copy gigantic page
  2024-10-25  0:44 [PATCH resend 1/2] mm: always use base address when clear gigantic page Kefeng Wang
@ 2024-10-25  0:44 ` Kefeng Wang
  2024-10-25 22:56 ` [PATCH resend 1/2] mm: always use base address when clear " Andrew Morton
  1 sibling, 0 replies; 4+ messages in thread
From: Kefeng Wang @ 2024-10-25  0:44 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Matthew Wilcox, Muchun Song, Huang, Ying,
	linux-mm, Kefeng Wang

When copying a gigantic page, it copies the page from the first subpage
to the last subpage; that is, an aligned base address is needed there,
and we don't need to align the address down in the caller, as the real
address will be passed to process_huge_page().

Fixes: 530dd9926dc1 ("mm: memory: improve copy_user_large_folio()")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/hugetlb.c | 5 ++---
 mm/memory.c  | 1 +
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 906294ac85dc..2674dba12c73 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5338,7 +5338,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 					break;
 				}
 				ret = copy_user_large_folio(new_folio, pte_folio,
-						ALIGN_DOWN(addr, sz), dst_vma);
+							    addr, dst_vma);
 				folio_put(pte_folio);
 				if (ret) {
 					folio_put(new_folio);
@@ -6637,8 +6637,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			*foliop = NULL;
 			goto out;
 		}
-		ret = copy_user_large_folio(folio, *foliop,
-					    ALIGN_DOWN(dst_addr, size), dst_vma);
+		ret = copy_user_large_folio(folio, *foliop, dst_addr, dst_vma);
 		folio_put(*foliop);
 		*foliop = NULL;
 		if (ret) {
diff --git a/mm/memory.c b/mm/memory.c
index 934ab5fff537..281c0460c572 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6841,6 +6841,7 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
 	struct page *dst_page;
 	struct page *src_page;
 
+	addr = ALIGN_DOWN(addr, folio_size(dst));
 	for (i = 0; i < nr_pages; i++) {
 		dst_page = folio_page(dst, i);
 		src_page = folio_page(src, i);
-- 
2.27.0




* Re: [PATCH resend 1/2] mm: always use base address when clear gigantic page
  2024-10-25  0:44 [PATCH resend 1/2] mm: always use base address when clear gigantic page Kefeng Wang
  2024-10-25  0:44 ` [PATCH resend 2/2] mm: always use base address when copy " Kefeng Wang
@ 2024-10-25 22:56 ` Andrew Morton
  2024-10-26  1:34   ` Kefeng Wang
  1 sibling, 1 reply; 4+ messages in thread
From: Andrew Morton @ 2024-10-25 22:56 UTC (permalink / raw)
  To: Kefeng Wang
  Cc: David Hildenbrand, Matthew Wilcox, Muchun Song, Huang, Ying, linux-mm

On Fri, 25 Oct 2024 08:44:55 +0800 Kefeng Wang <wangkefeng.wang@huawei.com> wrote:

> When clearing a gigantic page, it zeroes the page from the first subpage
> to the last subpage; that is, an aligned base address is needed there,
> and we don't need to align the address down in the caller, as the real
> address will be passed to process_huge_page().

Matthew just told us that folios don't have subpages
(https://lkml.kernel.org/r/ZxsRCyBSO-C27Uzn@casper.infradead.org).

Please carefully describe the impact of this change.  I think it's
"small cleanup and optimization?"

Also, I find the changelog rather hard to follow.  I think we're adding
the alignment operation to the callee and hence removing it from the
caller?




* Re: [PATCH resend 1/2] mm: always use base address when clear gigantic page
  2024-10-25 22:56 ` [PATCH resend 1/2] mm: always use base address when clear " Andrew Morton
@ 2024-10-26  1:34   ` Kefeng Wang
  0 siblings, 0 replies; 4+ messages in thread
From: Kefeng Wang @ 2024-10-26  1:34 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Matthew Wilcox, Muchun Song, Huang, Ying, linux-mm



On 2024/10/26 6:56, Andrew Morton wrote:
> On Fri, 25 Oct 2024 08:44:55 +0800 Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
> 
>> When clearing a gigantic page, it zeroes the page from the first subpage
>> to the last subpage; that is, an aligned base address is needed there,
>> and we don't need to align the address down in the caller, as the real
>> address will be passed to process_huge_page().
> 
> Matthew just told us that folios don't have subpages
> (https://lkml.kernel.org/r/ZxsRCyBSO-C27Uzn@casper.infradead.org).
> 

OK, will change subpage to page.

> Please carefully describe the impact of this change.  I think it's
> "small cleanup and optimization?"
> 
> Also, I find the changelog rather hard to follow.  I think we're adding
> the alignment operation to the callee and hence removing it from the
> caller?
> 

Sorry for the confusion; there are some differences between a gigantic
page (nr_pages > MAX_ORDER_NR_PAGES) and a non-gigantic page:

1) For a gigantic page, we always clear/copy pages from the first page
to the last page, see copy_user_gigantic_page()/clear_gigantic_page(),
but if we directly pass addr_hint, which may not be the address of the
first page, and the arch code uses this addr_hint to flush the cache,
it may flush the wrong cache.

2) For a non-gigantic page, the base address is calculated inside, see
process_huge_page(); if we pass the wrong addr_hint, it only has a
performance impact (not sure, but at least no difference on arm64), no
functional impact.

Will update the changelog and resend.






end of thread, other threads:[~2024-10-26  1:34 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-10-25  0:44 [PATCH resend 1/2] mm: always use base address when clear gigantic page Kefeng Wang
2024-10-25  0:44 ` [PATCH resend 2/2] mm: always use base address when copy " Kefeng Wang
2024-10-25 22:56 ` [PATCH resend 1/2] mm: always use base address when clear " Andrew Morton
2024-10-26  1:34   ` Kefeng Wang
