* [RESEND PATCH] mm/hmm: Fix initial PFN for hugetlbfs pages
From: rcampbell @ 2019-04-19 23:35 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, Ralph Campbell, Jérôme Glisse, Ira Weiny,
John Hubbard, Dan Williams, Arnd Bergmann, Balbir Singh,
Dan Carpenter, Matthew Wilcox, Souptick Joarder, Andrew Morton
From: Ralph Campbell <rcampbell@nvidia.com>
The mmotm patch [1] adds hugetlbfs support for HMM but the initial
PFN used to fill the HMM range->pfns[] array doesn't properly
compute the starting PFN offset.
This can be tested by running test-hugetlbfs-read from [2].
Fix this by shifting the byte offset within the huge page right by the
device's page shift (range->page_shift) so it becomes a PFN offset.
Andrew, this should probably be squashed into Jerome's patch.
[1] https://marc.info/?l=linux-mm&m=155432003506068&w=2
("mm/hmm: mirror hugetlbfs (snapshoting, faulting and DMA mapping)")
[2] https://gitlab.freedesktop.org/glisse/svm-cl-tests
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Cc: Jérôme Glisse <jglisse@redhat.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
mm/hmm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index def451a56c3e..fcf8e4fb5770 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -868,7 +868,7 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
goto unlock;
}
- pfn = pte_pfn(entry) + (start & mask);
+ pfn = pte_pfn(entry) + ((start & mask) >> range->page_shift);
for (; addr < end; addr += size, i++, pfn += pfn_inc)
range->pfns[i] = hmm_device_entry_from_pfn(range, pfn) |
cpu_flags;
--
2.20.1
* Re: [RESEND PATCH] mm/hmm: Fix initial PFN for hugetlbfs pages
From: Jerome Glisse @ 2019-04-22 14:55 UTC (permalink / raw)
To: rcampbell
Cc: linux-mm, linux-kernel, Ira Weiny, John Hubbard, Dan Williams,
Arnd Bergmann, Balbir Singh, Dan Carpenter, Matthew Wilcox,
Souptick Joarder, Andrew Morton
On Fri, Apr 19, 2019 at 04:35:36PM -0700, rcampbell@nvidia.com wrote:
> From: Ralph Campbell <rcampbell@nvidia.com>
>
> The mmotm patch [1] adds hugetlbfs support for HMM but the initial
> PFN used to fill the HMM range->pfns[] array doesn't properly
> compute the starting PFN offset.
> This can be tested by running test-hugetlbfs-read from [2].
>
> Fix this by shifting the byte offset within the huge page right by the
> device's page shift (range->page_shift) so it becomes a PFN offset.
>
> Andrew, this should probably be squashed into Jerome's patch.
>
> [1] https://marc.info/?l=linux-mm&m=155432003506068&w=2
> ("mm/hmm: mirror hugetlbfs (snapshoting, faulting and DMA mapping)")
> [2] https://gitlab.freedesktop.org/glisse/svm-cl-tests
>
> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Good catch.
Reviewed-by: Jérôme Glisse <jglisse@redhat.com>