* [PATCH] mm: memory: use nth_page() in clear/copy_subpage()
@ 2023-12-29  8:22 Kefeng Wang
  2023-12-29  8:49 ` Matthew Wilcox
From: Kefeng Wang @ 2023-12-29  8:22 UTC
  To: Andrew Morton; +Cc: ziy, linux-mm, Kefeng Wang

Clearing and copying of gigantic huge pages were converted to use
nth_page() to handle possibly discontiguous struct pages (SPARSEMEM
without VMEMMAP), but the non-gigantic path was not changed; fix it too.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 5c757fba8858..173b9b696230 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6039,7 +6039,7 @@ static int clear_subpage(unsigned long addr, int idx, void *arg)
 {
 	struct page *page = arg;
 
-	clear_user_highpage(page + idx, addr);
+	clear_user_highpage(nth_page(page, idx), addr);
 	return 0;
 }
 
@@ -6089,10 +6089,11 @@ struct copy_subpage_arg {
 static int copy_subpage(unsigned long addr, int idx, void *arg)
 {
 	struct copy_subpage_arg *copy_arg = arg;
+	struct page *dst = nth_page(copy_arg->dst, idx);
+	struct page *src = nth_page(copy_arg->src, idx);
 
-	if (copy_mc_user_highpage(copy_arg->dst + idx, copy_arg->src + idx,
-				  addr, copy_arg->vma)) {
-		memory_failure_queue(page_to_pfn(copy_arg->src + idx), 0);
+	if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma)) {
+		memory_failure_queue(page_to_pfn(src), 0);
 		return -EHWPOISON;
 	}
 	return 0;
-- 
2.41.0
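
For context, nth_page() only differs from plain `page + n' pointer
arithmetic when SPARSEMEM is enabled without VMEMMAP, where the memmap is
allocated per-section and struct pages are not virtually contiguous across
section boundaries. Its definition in include/linux/mm.h around this
kernel version is roughly:

	/* With a virtually contiguous memmap (VMEMMAP or FLATMEM), stepping
	 * through struct pages is plain pointer arithmetic; with classic
	 * SPARSEMEM the walk must round-trip through the pfn so that
	 * crossing a section boundary resolves to the correct struct page.
	 */
	#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
	#define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
	#else
	#define nth_page(page, n)	((page) + (n))
	#endif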




* Re: [PATCH] mm: memory: use nth_page() in clear/copy_subpage()
  2023-12-29  8:22 [PATCH] mm: memory: use nth_page() in clear/copy_subpage() Kefeng Wang
@ 2023-12-29  8:49 ` Matthew Wilcox
  2023-12-29 11:15   ` David Hildenbrand
From: Matthew Wilcox @ 2023-12-29  8:49 UTC
  To: Kefeng Wang; +Cc: Andrew Morton, ziy, linux-mm

On Fri, Dec 29, 2023 at 04:22:07PM +0800, Kefeng Wang wrote:
> Clearing and copying of gigantic huge pages were converted to use
> nth_page() to handle possibly discontiguous struct pages (SPARSEMEM
> without VMEMMAP), but the non-gigantic path was not changed; fix it too.

Can there be discontiguities within a non-gigantic huge page?  My
impression was that you can't have a discontiguity at such a small
boundary.



* Re: [PATCH] mm: memory: use nth_page() in clear/copy_subpage()
  2023-12-29  8:49 ` Matthew Wilcox
@ 2023-12-29 11:15   ` David Hildenbrand
  2024-01-02  6:53     ` Kefeng Wang
From: David Hildenbrand @ 2023-12-29 11:15 UTC
  To: Matthew Wilcox, Kefeng Wang; +Cc: Andrew Morton, ziy, linux-mm

On 29.12.23 09:49, Matthew Wilcox wrote:
> On Fri, Dec 29, 2023 at 04:22:07PM +0800, Kefeng Wang wrote:
>> Clearing and copying of gigantic huge pages were converted to use
>> nth_page() to handle possibly discontiguous struct pages (SPARSEMEM
>> without VMEMMAP), but the non-gigantic path was not changed; fix it too.
> 
> Can there be discontiguities within a non-gigantic huge page?  My
> impression was that you can't have a discontiguity at such a small
> boundary.

No, we can't. MAX_ORDER allocations from the buddy always completely fit 
into a memory section.
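
To make the sizes concrete, a minimal sketch of the arithmetic, assuming
common defaults (4 KiB base pages, a largest buddy block of 2^10 pages,
and 128 MiB sections; SECTION_SIZE_BITS and the order limit are
per-architecture and per-version):

	#include <stdio.h>

	int main(void)
	{
		unsigned long page_shift = 12;    /* 4 KiB base pages */
		unsigned long max_order = 10;     /* largest buddy block: 2^10 pages */
		unsigned long section_bits = 27;  /* 128 MiB memory sections */

		unsigned long max_alloc = 1UL << (page_shift + max_order);
		unsigned long section_size = 1UL << section_bits;

		/* Buddy blocks are naturally aligned to their own size, so a
		 * 4 MiB block can never straddle a 128 MiB section boundary. */
		printf("max buddy alloc: %lu MiB, section: %lu MiB\n",
		       max_alloc >> 20, section_size >> 20);
		return 0;
	}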

-- 
Cheers,

David / dhildenb




* Re: [PATCH] mm: memory: use nth_page() in clear/copy_subpage()
  2023-12-29 11:15   ` David Hildenbrand
@ 2024-01-02  6:53     ` Kefeng Wang
  2024-01-02 16:11       ` David Hildenbrand
From: Kefeng Wang @ 2024-01-02  6:53 UTC
  To: David Hildenbrand, Matthew Wilcox; +Cc: Andrew Morton, ziy, linux-mm



On 2023/12/29 19:15, David Hildenbrand wrote:
> On 29.12.23 09:49, Matthew Wilcox wrote:
>> On Fri, Dec 29, 2023 at 04:22:07PM +0800, Kefeng Wang wrote:
>>> Clearing and copying of gigantic huge pages were converted to use
>>> nth_page() to handle possibly discontiguous struct pages (SPARSEMEM
>>> without VMEMMAP), but the non-gigantic path was not changed; fix it too.
>>
>> Can there be discontiguities within a non-gigantic huge page?  My
>> impression was that you can't have a discontiguity at such a small
>> boundary.
> 
> No, we can't. MAX_ORDER allocations from the buddy always completely fit 
> into a memory section.

On ARM64 we have 32M (16 * 2M) HugeTLB pages; those may not be within a
single mem section, right?

But after the v5.13 commit 782276b4d0ad ("arm64: Force SPARSEMEM_VMEMMAP
as the only memory management model"), it looks like only old LTS kernels,
e.g. 5.10, could hit this issue.



* Re: [PATCH] mm: memory: use nth_page() in clear/copy_subpage()
  2024-01-02  6:53     ` Kefeng Wang
@ 2024-01-02 16:11       ` David Hildenbrand
  2024-01-03  6:23         ` Kefeng Wang
From: David Hildenbrand @ 2024-01-02 16:11 UTC
  To: Kefeng Wang, Matthew Wilcox; +Cc: Andrew Morton, ziy, linux-mm

On 02.01.24 07:53, Kefeng Wang wrote:
> 
> 
> On 2023/12/29 19:15, David Hildenbrand wrote:
>> On 29.12.23 09:49, Matthew Wilcox wrote:
>>> On Fri, Dec 29, 2023 at 04:22:07PM +0800, Kefeng Wang wrote:
>>>> Clearing and copying of gigantic huge pages were converted to use
>>>> nth_page() to handle possibly discontiguous struct pages (SPARSEMEM
>>>> without VMEMMAP), but the non-gigantic path was not changed; fix it too.
>>>
>>> Can there be discontiguities within a non-gigantic huge page?  My
>>> impression was that you can't have a discontiguity at such a small
>>> boundary.
>>
>> No, we can't. MAX_ORDER allocations from the buddy always completely fit
>> into a memory section.
> 
> On ARM64 we have 32M (16 * 2M) HugeTLB pages; those may not be within a
> single mem section, right?

I recall the mem sections are always at least 128 MiB on arm64.

Note that anything > MAX_ORDER is called a "gigantic huge page" in 
hugetlb code.
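
For reference, hugetlb's test is roughly the following (simplified from
include/linux/hugetlb.h as of kernels in this timeframe; MAX_ORDER was
later renamed to MAX_PAGE_ORDER):

	/* An hstate whose order exceeds MAX_ORDER cannot come from the
	 * buddy allocator and is treated as gigantic. */
	static inline bool hstate_is_gigantic(struct hstate *h)
	{
		return huge_page_order(h) > MAX_ORDER;
	}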


-- 
Cheers,

David / dhildenb




* Re: [PATCH] mm: memory: use nth_page() in clear/copy_subpage()
  2024-01-02 16:11       ` David Hildenbrand
@ 2024-01-03  6:23         ` Kefeng Wang
From: Kefeng Wang @ 2024-01-03  6:23 UTC
  To: David Hildenbrand, Matthew Wilcox; +Cc: Andrew Morton, ziy, linux-mm



On 2024/1/3 0:11, David Hildenbrand wrote:
> On 02.01.24 07:53, Kefeng Wang wrote:
>>
>>
>> On 2023/12/29 19:15, David Hildenbrand wrote:
>>> On 29.12.23 09:49, Matthew Wilcox wrote:
>>>> On Fri, Dec 29, 2023 at 04:22:07PM +0800, Kefeng Wang wrote:
>>>>> Clearing and copying of gigantic huge pages were converted to use
>>>>> nth_page() to handle possibly discontiguous struct pages (SPARSEMEM
>>>>> without VMEMMAP), but the non-gigantic path was not changed; fix it
>>>>> too.
>>>>
>>>> Can there be discontiguities within a non-gigantic huge page?  My
>>>> impression was that you can't have a discontiguity at such a small
>>>> boundary.
>>>
>>> No, we can't. MAX_ORDER allocations from the buddy always completely fit
>>> into a memory section.
>>
>> On ARM64 we have 32M (16 * 2M) HugeTLB pages; those may not be within a
>> single mem section, right?
> 
> I recall the mem sections are always at least 128 MiB on arm64.
> 
> Note that anything > MAX_ORDER is called a "gigantic huge page" in 
> hugetlb code.

Yes, we already check against MAX_ORDER; please ignore this one, thanks.



