* [PATCH v2 0/5] Use nth_page() in place of direct struct page manipulation
@ 2023-09-06 15:03 Zi Yan
  2023-09-06 15:03 ` [PATCH v2 1/5] mm/cma: use " Zi Yan
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Zi Yan @ 2023-09-06 15:03 UTC (permalink / raw)
  To: linux-mm, linux-kernel, linux-mips
  Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
	Matthew Wilcox (Oracle),
	David Hildenbrand, Mike Kravetz, Muchun Song, Mike Rapoport (IBM)

From: Zi Yan <ziy@nvidia.com>

On SPARSEMEM without VMEMMAP, struct page is not guaranteed to be
contiguous, since each memory section's memmap might be allocated
independently. hugetlb pages can span more than one memory section, so
direct struct page manipulation on hugetlb pages/subpages might yield
the wrong struct page. The kernel provides nth_page() to do the
manipulation properly. Use it whenever code can see hugetlb pages.
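
For reference, on SPARSEMEM without VMEMMAP nth_page() applies the
offset in pfn space instead of adding to the struct page pointer. A
simplified sketch of the helper (based on include/linux/mm.h; the exact
config guards may vary between kernel versions):

#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
#define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
#else
#define nth_page(page, n)	((page) + (n))
#endif

A plain "page + n" is only safe when both struct pages live in the same
section's memmap.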

The patches are on top of next-20230906

Changes from v1:
1. Separated the first patch into three and added Fixes tags for easier
   backporting.
2. Collected Reviewed-by tags.

Zi Yan (5):
  mm/cma: use nth_page() in place of direct struct page manipulation.
  mm/hugetlb: use nth_page() in place of direct struct page
    manipulation.
  mm/memory_hotplug: use nth_page() in place of direct struct page
    manipulation.
  fs: use nth_page() in place of direct struct page manipulation.
  mips: use nth_page() in place of direct struct page manipulation.

 arch/mips/mm/cache.c | 2 +-
 fs/hugetlbfs/inode.c | 4 ++--
 mm/cma.c             | 2 +-
 mm/hugetlb.c         | 2 +-
 mm/memory_hotplug.c  | 2 +-
 5 files changed, 6 insertions(+), 6 deletions(-)

-- 
2.40.1




* [PATCH v2 1/5] mm/cma: use nth_page() in place of direct struct page manipulation.
  2023-09-06 15:03 [PATCH v2 0/5] Use nth_page() in place of direct struct page manipulation Zi Yan
@ 2023-09-06 15:03 ` Zi Yan
  2023-09-06 15:03 ` [PATCH v2 2/5] mm/hugetlb: " Zi Yan
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Zi Yan @ 2023-09-06 15:03 UTC (permalink / raw)
  To: linux-mm, linux-kernel, linux-mips
  Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
	Matthew Wilcox (Oracle),
	David Hildenbrand, Mike Kravetz, Muchun Song, Mike Rapoport (IBM),
	stable, Muchun Song

From: Zi Yan <ziy@nvidia.com>

When dealing with hugetlb pages, manipulating struct page pointers
directly can yield the wrong struct page, since struct page is not
guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use nth_page()
to handle it properly.

Fixes: 2813b9c02962 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
Cc: <stable@vger.kernel.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/cma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/cma.c b/mm/cma.c
index da2967c6a223..2b2494fd6b59 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -505,7 +505,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 	 */
 	if (page) {
 		for (i = 0; i < count; i++)
-			page_kasan_tag_reset(page + i);
+			page_kasan_tag_reset(nth_page(page, i));
 	}
 
 	if (ret && !no_warn) {
-- 
2.40.1




* [PATCH v2 2/5] mm/hugetlb: use nth_page() in place of direct struct page manipulation.
  2023-09-06 15:03 [PATCH v2 0/5] Use nth_page() in place of direct struct page manipulation Zi Yan
  2023-09-06 15:03 ` [PATCH v2 1/5] mm/cma: use " Zi Yan
@ 2023-09-06 15:03 ` Zi Yan
  2023-09-06 15:03 ` [PATCH v2 3/5] mm/memory_hotplug: " Zi Yan
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Zi Yan @ 2023-09-06 15:03 UTC (permalink / raw)
  To: linux-mm, linux-kernel, linux-mips
  Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
	Matthew Wilcox (Oracle),
	David Hildenbrand, Mike Kravetz, Muchun Song, Mike Rapoport (IBM),
	stable, Muchun Song

From: Zi Yan <ziy@nvidia.com>

When dealing with hugetlb pages, manipulating struct page pointers
directly can yield the wrong struct page, since struct page is not
guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use nth_page()
to handle it properly.

Fixes: 57a196a58421 ("hugetlb: simplify hugetlb handling in follow_page_mask")
Cc: <stable@vger.kernel.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2e7188876672..2521cc694fd4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6489,7 +6489,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 			}
 		}
 
-		page += ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+		page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT));
 
 		/*
 		 * Note that page may be a sub-page, and with vmemmap
-- 
2.40.1




* [PATCH v2 3/5] mm/memory_hotplug: use nth_page() in place of direct struct page manipulation.
  2023-09-06 15:03 [PATCH v2 0/5] Use nth_page() in place of direct struct page manipulation Zi Yan
  2023-09-06 15:03 ` [PATCH v2 1/5] mm/cma: use " Zi Yan
  2023-09-06 15:03 ` [PATCH v2 2/5] mm/hugetlb: " Zi Yan
@ 2023-09-06 15:03 ` Zi Yan
  2023-09-06 17:17   ` David Hildenbrand
  2023-09-06 17:46   ` Zi Yan
  2023-09-06 15:03 ` [PATCH v2 4/5] fs: " Zi Yan
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 11+ messages in thread
From: Zi Yan @ 2023-09-06 15:03 UTC (permalink / raw)
  To: linux-mm, linux-kernel, linux-mips
  Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
	Matthew Wilcox (Oracle),
	David Hildenbrand, Mike Kravetz, Muchun Song, Mike Rapoport (IBM),
	stable, Muchun Song

From: Zi Yan <ziy@nvidia.com>

When dealing with hugetlb pages, manipulating struct page pointers
directly can yield the wrong struct page, since struct page is not
guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use nth_page()
to handle it properly.

Fixes: eeb0efd071d8 ("mm,memory_hotplug: fix scan_movable_pages() for gigantic hugepages")
Cc: <stable@vger.kernel.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/memory_hotplug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 1b03f4ec6fd2..3b301c4023ff 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1689,7 +1689,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
 		 */
 		if (HPageMigratable(head))
 			goto found;
-		skip = compound_nr(head) - (page - head);
+		skip = compound_nr(head) - (pfn - page_to_pfn(head));
 		pfn += skip - 1;
 	}
 	return -ENOENT;
-- 
2.40.1




* [PATCH v2 4/5] fs: use nth_page() in place of direct struct page manipulation.
  2023-09-06 15:03 [PATCH v2 0/5] Use nth_page() in place of direct struct page manipulation Zi Yan
                   ` (2 preceding siblings ...)
  2023-09-06 15:03 ` [PATCH v2 3/5] mm/memory_hotplug: " Zi Yan
@ 2023-09-06 15:03 ` Zi Yan
  2023-09-06 15:03 ` [PATCH v2 5/5] mips: " Zi Yan
  2023-09-08 14:46 ` [PATCH v2 0/5] Use " Philippe Mathieu-Daudé
  5 siblings, 0 replies; 11+ messages in thread
From: Zi Yan @ 2023-09-06 15:03 UTC (permalink / raw)
  To: linux-mm, linux-kernel, linux-mips
  Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
	Matthew Wilcox (Oracle),
	David Hildenbrand, Mike Kravetz, Muchun Song, Mike Rapoport (IBM),
	stable, Muchun Song

From: Zi Yan <ziy@nvidia.com>

When dealing with hugetlb pages, struct page is not guaranteed to be
contiguous on SPARSEMEM without VMEMMAP. Use nth_page() to handle it
properly.

Fixes: 38c1ddbde6c6 ("hugetlbfs: improve read HWPOISON hugepage")
Cc: <stable@vger.kernel.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
 fs/hugetlbfs/inode.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 316c4cebd3f3..60fce26ff937 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -295,7 +295,7 @@ static size_t adjust_range_hwpoison(struct page *page, size_t offset, size_t byt
 	size_t res = 0;
 
 	/* First subpage to start the loop. */
-	page += offset / PAGE_SIZE;
+	page = nth_page(page, offset / PAGE_SIZE);
 	offset %= PAGE_SIZE;
 	while (1) {
 		if (is_raw_hwpoison_page_in_hugepage(page))
@@ -309,7 +309,7 @@ static size_t adjust_range_hwpoison(struct page *page, size_t offset, size_t byt
 			break;
 		offset += n;
 		if (offset == PAGE_SIZE) {
-			page++;
+			page = nth_page(page, 1);
 			offset = 0;
 		}
 	}
-- 
2.40.1




* [PATCH v2 5/5] mips: use nth_page() in place of direct struct page manipulation.
  2023-09-06 15:03 [PATCH v2 0/5] Use nth_page() in place of direct struct page manipulation Zi Yan
                   ` (3 preceding siblings ...)
  2023-09-06 15:03 ` [PATCH v2 4/5] fs: " Zi Yan
@ 2023-09-06 15:03 ` Zi Yan
  2023-09-08 14:46 ` [PATCH v2 0/5] Use " Philippe Mathieu-Daudé
  5 siblings, 0 replies; 11+ messages in thread
From: Zi Yan @ 2023-09-06 15:03 UTC (permalink / raw)
  To: linux-mm, linux-kernel, linux-mips
  Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
	Matthew Wilcox (Oracle),
	David Hildenbrand, Mike Kravetz, Muchun Song, Mike Rapoport (IBM),
	stable

From: Zi Yan <ziy@nvidia.com>

__flush_dcache_pages() is called during hugetlb migration via
migrate_pages() -> migrate_hugetlbs() -> unmap_and_move_huge_page()
-> move_to_new_folio() -> flush_dcache_folio(). With hugetlb pages and
SPARSEMEM without VMEMMAP, struct page is not guaranteed to be
contiguous beyond a memory section. Use nth_page() instead.

Fixes: 15fa3e8e3269 ("mips: implement the new page table range API")
Cc: <stable@vger.kernel.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 arch/mips/mm/cache.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 02042100e267..7f830634dbe7 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -117,7 +117,7 @@ void __flush_dcache_pages(struct page *page, unsigned int nr)
 	 * get faulted into the tlb (and thus flushed) anyways.
 	 */
 	for (i = 0; i < nr; i++) {
-		addr = (unsigned long)kmap_local_page(page + i);
+		addr = (unsigned long)kmap_local_page(nth_page(page, i));
 		flush_data_cache_page(addr);
 		kunmap_local((void *)addr);
 	}
-- 
2.40.1




* Re: [PATCH v2 3/5] mm/memory_hotplug: use nth_page() in place of direct struct page manipulation.
  2023-09-06 15:03 ` [PATCH v2 3/5] mm/memory_hotplug: " Zi Yan
@ 2023-09-06 17:17   ` David Hildenbrand
  2023-09-06 17:39     ` Zi Yan
  2023-09-06 17:46   ` Zi Yan
  1 sibling, 1 reply; 11+ messages in thread
From: David Hildenbrand @ 2023-09-06 17:17 UTC (permalink / raw)
  To: Zi Yan, linux-mm, linux-kernel, linux-mips
  Cc: Andrew Morton, Thomas Bogendoerfer, Matthew Wilcox (Oracle),
	Mike Kravetz, Muchun Song, Mike Rapoport (IBM),
	stable, Muchun Song

On 06.09.23 17:03, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>

Subject talks about "nth_page()" but that's not what this patch does.

> 
> When dealing with hugetlb pages, manipulating struct page pointers
> directly can yield the wrong struct page, since struct page is not
> guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use nth_page()
> to handle it properly.

^ ditto

> 
> Fixes: eeb0efd071d8 ("mm,memory_hotplug: fix scan_movable_pages() for gigantic hugepages")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Muchun Song <songmuchun@bytedance.com>
> ---
>   mm/memory_hotplug.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 1b03f4ec6fd2..3b301c4023ff 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1689,7 +1689,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
>   		 */
>   		if (HPageMigratable(head))
>   			goto found;
> -		skip = compound_nr(head) - (page - head);
> +		skip = compound_nr(head) - (pfn - page_to_pfn(head));
>   		pfn += skip - 1;
>   	}
>   	return -ENOENT;

I suspect systems without VMEMMAP also don't usually support gigantic 
pages AND hotunplug :)

With the subject+description fixed

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb




* Re: [PATCH v2 3/5] mm/memory_hotplug: use nth_page() in place of direct struct page manipulation.
  2023-09-06 17:17   ` David Hildenbrand
@ 2023-09-06 17:39     ` Zi Yan
  0 siblings, 0 replies; 11+ messages in thread
From: Zi Yan @ 2023-09-06 17:39 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-mm, linux-kernel, linux-mips, Andrew Morton,
	Thomas Bogendoerfer, "Matthew Wilcox (Oracle)",
	Mike Kravetz, Muchun Song, "Mike Rapoport (IBM)",
	stable, Muchun Song

On 6 Sep 2023, at 13:17, David Hildenbrand wrote:

> On 06.09.23 17:03, Zi Yan wrote:
>> From: Zi Yan <ziy@nvidia.com>
>
> Subject talks about "nth_page()" but that's not what this patch does.
>
>>
>> When dealing with hugetlb pages, manipulating struct page pointers
>> directly can yield the wrong struct page, since struct page is not
>> guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use nth_page()
>> to handle it properly.
>
> ^ ditto
>
>>
>> Fixes: eeb0efd071d8 ("mm,memory_hotplug: fix scan_movable_pages() for gigantic hugepages")
>> Cc: <stable@vger.kernel.org>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> Reviewed-by: Muchun Song <songmuchun@bytedance.com>
>> ---
>>   mm/memory_hotplug.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index 1b03f4ec6fd2..3b301c4023ff 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -1689,7 +1689,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
>>   		 */
>>   		if (HPageMigratable(head))
>>   			goto found;
>> -		skip = compound_nr(head) - (page - head);
>> +		skip = compound_nr(head) - (pfn - page_to_pfn(head));
>>   		pfn += skip - 1;
>>   	}
>>   	return -ENOENT;
>
> I suspect systems without VMEMMAP also don't usually support gigantic pages AND hotunplug :)
>
> With the subject+description fixed
>
> Acked-by: David Hildenbrand <david@redhat.com>

Sure. Thanks.


--
Best Regards,
Yan, Zi


* Re: [PATCH v2 3/5] mm/memory_hotplug: use nth_page() in place of direct struct page manipulation.
  2023-09-06 15:03 ` [PATCH v2 3/5] mm/memory_hotplug: " Zi Yan
  2023-09-06 17:17   ` David Hildenbrand
@ 2023-09-06 17:46   ` Zi Yan
  1 sibling, 0 replies; 11+ messages in thread
From: Zi Yan @ 2023-09-06 17:46 UTC (permalink / raw)
  To: linux-mm, linux-kernel, linux-mips
  Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer,
	"Matthew Wilcox (Oracle)",
	David Hildenbrand, Mike Kravetz, Muchun Song,
	"Mike Rapoport (IBM)",
	stable, Muchun Song

On 6 Sep 2023, at 11:03, Zi Yan wrote:

> From: Zi Yan <ziy@nvidia.com>
>
> When dealing with hugetlb pages, manipulating struct page pointers
> directly can yield the wrong struct page, since struct page is not
> guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use nth_page()
> to handle it properly.
>
> Fixes: eeb0efd071d8 ("mm,memory_hotplug: fix scan_movable_pages() for gigantic hugepages")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  mm/memory_hotplug.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 1b03f4ec6fd2..3b301c4023ff 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1689,7 +1689,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
>  		 */
>  		if (HPageMigratable(head))
>  			goto found;
> -		skip = compound_nr(head) - (page - head);
> +		skip = compound_nr(head) - (pfn - page_to_pfn(head));
>  		pfn += skip - 1;
>  	}
>  	return -ENOENT;
> -- 
> 2.40.1

Fixed the subject and commit message:

From: Zi Yan <ziy@nvidia.com>
Date: Wed, 6 Sep 2023 10:51:21 -0400
Subject: [PATCH v2 3/5] mm/memory_hotplug: use pfn calculation in place of direct struct page manipulation.

When dealing with hugetlb pages, manipulating struct page pointers
directly can yield the wrong struct page, since struct page is not
guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use pfn
calculation to handle it properly.

Fixes: eeb0efd071d8 ("mm,memory_hotplug: fix scan_movable_pages() for gigantic hugepages")
Cc: <stable@vger.kernel.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
 mm/memory_hotplug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 1b03f4ec6fd2..3b301c4023ff 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1689,7 +1689,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
                 */
                if (HPageMigratable(head))
                        goto found;
-               skip = compound_nr(head) - (page - head);
+               skip = compound_nr(head) - (pfn - page_to_pfn(head));
                pfn += skip - 1;
        }
        return -ENOENT;
--
2.40.1


--
Best Regards,
Yan, Zi


* Re: [PATCH v2 0/5] Use nth_page() in place of direct struct page manipulation
  2023-09-06 15:03 [PATCH v2 0/5] Use nth_page() in place of direct struct page manipulation Zi Yan
                   ` (4 preceding siblings ...)
  2023-09-06 15:03 ` [PATCH v2 5/5] mips: " Zi Yan
@ 2023-09-08 14:46 ` Philippe Mathieu-Daudé
  2023-09-08 14:56   ` Zi Yan
  5 siblings, 1 reply; 11+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-09-08 14:46 UTC (permalink / raw)
  To: Zi Yan, linux-mm, linux-kernel, linux-mips
  Cc: Andrew Morton, Thomas Bogendoerfer, Matthew Wilcox (Oracle),
	David Hildenbrand, Mike Kravetz, Muchun Song, Mike Rapoport (IBM)

Hi,

On 6/9/23 17:03, Zi Yan wrote:
> From: Zi Yan <ziy@nvidia.com>
> 
> On SPARSEMEM without VMEMMAP, struct page is not guaranteed to be
> contiguous, since each memory section's memmap might be allocated
> independently. hugetlb pages can span more than one memory section, so
> direct struct page manipulation on hugetlb pages/subpages might yield
> the wrong struct page. The kernel provides nth_page() to do the
> manipulation properly. Use it whenever code can see hugetlb pages.

How can we notice "whenever code can see hugetlb pages"?
From your series it seems you did a manual code audit, is that correct?
(I ask because I'm wondering about code scalability and catching other
cases).

Thanks,

Phil.



* Re: [PATCH v2 0/5] Use nth_page() in place of direct struct page manipulation
  2023-09-08 14:46 ` [PATCH v2 0/5] Use " Philippe Mathieu-Daudé
@ 2023-09-08 14:56   ` Zi Yan
  0 siblings, 0 replies; 11+ messages in thread
From: Zi Yan @ 2023-09-08 14:56 UTC (permalink / raw)
  To: "Philippe Mathieu-Daudé"
  Cc: linux-mm, linux-kernel, linux-mips, Andrew Morton,
	Thomas Bogendoerfer, "Matthew Wilcox (Oracle)",
	David Hildenbrand, Mike Kravetz, Muchun Song,
	"Mike Rapoport (IBM)"

On 8 Sep 2023, at 10:46, Philippe Mathieu-Daudé wrote:

> Hi,
>
> On 6/9/23 17:03, Zi Yan wrote:
>> From: Zi Yan <ziy@nvidia.com>
>>
>> On SPARSEMEM without VMEMMAP, struct page is not guaranteed to be
>> contiguous, since each memory section's memmap might be allocated
>> independently. hugetlb pages can span more than one memory section, so
>> direct struct page manipulation on hugetlb pages/subpages might yield
>> the wrong struct page. The kernel provides nth_page() to do the
>> manipulation properly. Use it whenever code can see hugetlb pages.
>
> How can we notice "whenever code can see hugetlb pages"?
> From your series it seems you did a manual code audit, is that correct?
> (I ask because I'm wondering about code scalability and catching other
> cases).

Anything allocated from the buddy allocator should be free of this
problem, because MAX_ORDER is always smaller than a memory section size.
This means the majority of kernel code should be fine. What is left is
core mm code that can touch hugetlb pages, like migration, memory
compaction, and of course hugetlb code itself. Yes, I did a manual code
audit, and hopefully I caught all cases.

An alternative is to use nth_page() everywhere, but that is a very invasive
change for an uncommon config (SPARSEMEM + !VMEMMAP).
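
To illustrate when the two forms can diverge, here is a hypothetical
helper (not part of the series; pfn_to_section_nr() is the existing
macro from include/linux/mmzone.h):

	/* page + i is only safe if both pages sit in the same section's memmap */
	static bool needs_nth_page(struct page *page, unsigned long i)
	{
		unsigned long pfn = page_to_pfn(page);

		return pfn_to_section_nr(pfn) != pfn_to_section_nr(pfn + i);
	}

Since buddy allocations never cross a memory section boundary, this can
only trigger for pages larger than a section, i.e. hugetlb.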


--
Best Regards,
Yan, Zi
