linux-mm.kvack.org archive mirror
* [PATCH v2 0/2] mm/hmm: fault non-owner device private entries
@ 2022-07-25 18:36 Ralph Campbell
  2022-07-25 18:36 ` [PATCH v2 1/2] " Ralph Campbell
  2022-07-25 18:36 ` [PATCH v2 2/2] mm/hmm: add a test for cross device private faults Ralph Campbell
  0 siblings, 2 replies; 10+ messages in thread
From: Ralph Campbell @ 2022-07-25 18:36 UTC (permalink / raw)
  To: linux-mm
  Cc: Felix Kuehling, Philip Yang, Alistair Popple, Jason Gunthorpe,
	Andrew Morton, Ralph Campbell

Changes from v1 to v2:
Made code style changes suggested by Alistair Popple
Added a self test to hmm-tests.c (Jason Gunthorpe)

Ralph Campbell (2):
  mm/hmm: fault non-owner device private entries
  mm/hmm: add a test for cross device private faults

 mm/hmm.c                               | 19 ++++++++-----------
 tools/testing/selftests/vm/hmm-tests.c | 14 ++++++++++++--
 2 files changed, 20 insertions(+), 13 deletions(-)

-- 
2.35.3



^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v2 1/2] mm/hmm: fault non-owner device private entries
  2022-07-25 18:36 [PATCH v2 0/2] mm/hmm: fault non-owner device private entries Ralph Campbell
@ 2022-07-25 18:36 ` Ralph Campbell
  2022-07-26  1:26   ` Alistair Popple
  2022-07-26 20:59   ` John Hubbard
  2022-07-25 18:36 ` [PATCH v2 2/2] mm/hmm: add a test for cross device private faults Ralph Campbell
  1 sibling, 2 replies; 10+ messages in thread
From: Ralph Campbell @ 2022-07-25 18:36 UTC (permalink / raw)
  To: linux-mm
  Cc: Felix Kuehling, Philip Yang, Alistair Popple, Jason Gunthorpe,
	Andrew Morton, Ralph Campbell, stable

If hmm_range_fault() is called with the HMM_PFN_REQ_FAULT flag and a
device private PTE is found, the hmm_range::dev_private_owner field is
used to determine whether the device private page should be faulted in.
However, if the device private page is not owned by the caller,
hmm_range_fault() returns an error instead of calling migrate_to_ram()
to fault in the page.

Cc: stable@vger.kernel.org
Fixes: 76612d6ce4cc ("mm/hmm: reorganize how !pte_present is handled in hmm_vma_handle_pte()")
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reported-by: Felix Kuehling <felix.kuehling@amd.com>
---
 mm/hmm.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 3fd3242c5e50..f2aa63b94d9b 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -212,14 +212,6 @@ int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
 		unsigned long end, unsigned long hmm_pfns[], pmd_t pmd);
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-static inline bool hmm_is_device_private_entry(struct hmm_range *range,
-		swp_entry_t entry)
-{
-	return is_device_private_entry(entry) &&
-		pfn_swap_entry_to_page(entry)->pgmap->owner ==
-		range->dev_private_owner;
-}
-
 static inline unsigned long pte_to_hmm_pfn_flags(struct hmm_range *range,
 						 pte_t pte)
 {
@@ -252,10 +244,12 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		swp_entry_t entry = pte_to_swp_entry(pte);
 
 		/*
-		 * Never fault in device private pages, but just report
-		 * the PFN even if not present.
+		 * Don't fault in device private pages owned by the caller,
+		 * just report the PFN.
 		 */
-		if (hmm_is_device_private_entry(range, entry)) {
+		if (is_device_private_entry(entry) &&
+		    pfn_swap_entry_to_page(entry)->pgmap->owner ==
+		    range->dev_private_owner) {
 			cpu_flags = HMM_PFN_VALID;
 			if (is_writable_device_private_entry(entry))
 				cpu_flags |= HMM_PFN_WRITE;
@@ -273,6 +267,9 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		if (!non_swap_entry(entry))
 			goto fault;
 
+		if (is_device_private_entry(entry))
+			goto fault;
+
 		if (is_device_exclusive_entry(entry))
 			goto fault;
 
-- 
2.35.3




* [PATCH v2 2/2] mm/hmm: add a test for cross device private faults
  2022-07-25 18:36 [PATCH v2 0/2] mm/hmm: fault non-owner device private entries Ralph Campbell
  2022-07-25 18:36 ` [PATCH v2 1/2] " Ralph Campbell
@ 2022-07-25 18:36 ` Ralph Campbell
  2022-07-26  1:38   ` Alistair Popple
  2022-07-26 21:03   ` John Hubbard
  1 sibling, 2 replies; 10+ messages in thread
From: Ralph Campbell @ 2022-07-25 18:36 UTC (permalink / raw)
  To: linux-mm
  Cc: Felix Kuehling, Philip Yang, Alistair Popple, Jason Gunthorpe,
	Andrew Morton, Ralph Campbell

Add a simple test case for when hmm_range_fault() is called with the
HMM_PFN_REQ_FAULT flag and a device private PTE is found for a device
other than the hmm_range::dev_private_owner. This should cause the
page to be faulted back to system memory from the other device and the
PFN returned in the output array.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 tools/testing/selftests/vm/hmm-tests.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
index 203323967b50..a5ce7cc2e7aa 100644
--- a/tools/testing/selftests/vm/hmm-tests.c
+++ b/tools/testing/selftests/vm/hmm-tests.c
@@ -1520,9 +1520,19 @@ TEST_F(hmm2, double_map)
 	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
 		ASSERT_EQ(ptr[i], i);
 
-	/* Punch a hole after the first page address. */
-	ret = munmap(buffer->ptr + self->page_size, self->page_size);
+	/* Migrate pages to device 1 and try to read from device 0. */
+	ret = hmm_dmirror_cmd(self->fd1, HMM_DMIRROR_MIGRATE, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	ret = hmm_dmirror_cmd(self->fd0, HMM_DMIRROR_READ, buffer, npages);
 	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+	ASSERT_EQ(buffer->faults, 1);
+
+	/* Check what device 0 read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
 
 	hmm_buffer_free(buffer);
 }
-- 
2.35.3




* Re: [PATCH v2 1/2] mm/hmm: fault non-owner device private entries
  2022-07-25 18:36 ` [PATCH v2 1/2] " Ralph Campbell
@ 2022-07-26  1:26   ` Alistair Popple
  2022-07-26 16:51     ` Ralph Campbell
  2022-07-26 20:59   ` John Hubbard
  1 sibling, 1 reply; 10+ messages in thread
From: Alistair Popple @ 2022-07-26  1:26 UTC (permalink / raw)
  To: Ralph Campbell
  Cc: linux-mm, Felix Kuehling, Philip Yang, Jason Gunthorpe,
	Andrew Morton, stable


Thanks Ralph, please add:

Reviewed-by: Alistair Popple <apopple@nvidia.com>

However I think the fixes tag is wrong, see below.

Ralph Campbell <rcampbell@nvidia.com> writes:

> If hmm_range_fault() is called with the HMM_PFN_REQ_FAULT flag and a
> device private PTE is found, the hmm_range::dev_private_owner field is
> used to determine whether the device private page should be faulted in.
> However, if the device private page is not owned by the caller,
> hmm_range_fault() returns an error instead of calling migrate_to_ram()
> to fault in the page.
>
> Cc: stable@vger.kernel.org
> Fixes: 76612d6ce4cc ("mm/hmm: reorganize how !pte_present is handled in hmm_vma_handle_pte()")

This should be 08ddddda667b ("mm/hmm: check the device private page owner in hmm_range_fault()")

> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> Reported-by: Felix Kuehling <felix.kuehling@amd.com>
> ---
>  mm/hmm.c | 19 ++++++++-----------
>  1 file changed, 8 insertions(+), 11 deletions(-)
>
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 3fd3242c5e50..f2aa63b94d9b 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -212,14 +212,6 @@ int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
>  		unsigned long end, unsigned long hmm_pfns[], pmd_t pmd);
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> -static inline bool hmm_is_device_private_entry(struct hmm_range *range,
> -		swp_entry_t entry)
> -{
> -	return is_device_private_entry(entry) &&
> -		pfn_swap_entry_to_page(entry)->pgmap->owner ==
> -		range->dev_private_owner;
> -}
> -
>  static inline unsigned long pte_to_hmm_pfn_flags(struct hmm_range *range,
>  						 pte_t pte)
>  {
> @@ -252,10 +244,12 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
>  		swp_entry_t entry = pte_to_swp_entry(pte);
>
>  		/*
> -		 * Never fault in device private pages, but just report
> -		 * the PFN even if not present.
> +		 * Don't fault in device private pages owned by the caller,
> +		 * just report the PFN.
>  		 */
> -		if (hmm_is_device_private_entry(range, entry)) {
> +		if (is_device_private_entry(entry) &&
> +		    pfn_swap_entry_to_page(entry)->pgmap->owner ==
> +		    range->dev_private_owner) {
>  			cpu_flags = HMM_PFN_VALID;
>  			if (is_writable_device_private_entry(entry))
>  				cpu_flags |= HMM_PFN_WRITE;
> @@ -273,6 +267,9 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
>  		if (!non_swap_entry(entry))
>  			goto fault;
>
> +		if (is_device_private_entry(entry))
> +			goto fault;
> +
>  		if (is_device_exclusive_entry(entry))
>  			goto fault;



* Re: [PATCH v2 2/2] mm/hmm: add a test for cross device private faults
  2022-07-25 18:36 ` [PATCH v2 2/2] mm/hmm: add a test for cross device private faults Ralph Campbell
@ 2022-07-26  1:38   ` Alistair Popple
  2022-07-26 21:03   ` John Hubbard
  1 sibling, 0 replies; 10+ messages in thread
From: Alistair Popple @ 2022-07-26  1:38 UTC (permalink / raw)
  To: Ralph Campbell
  Cc: linux-mm, Felix Kuehling, Philip Yang, Jason Gunthorpe, Andrew Morton


Reviewed-by: Alistair Popple <apopple@nvidia.com>

Ralph Campbell <rcampbell@nvidia.com> writes:

> Add a simple test case for when hmm_range_fault() is called with the
> HMM_PFN_REQ_FAULT flag and a device private PTE is found for a device
> other than the hmm_range::dev_private_owner. This should cause the
> page to be faulted back to system memory from the other device and the
> PFN returned in the output array.
>
> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> ---
>  tools/testing/selftests/vm/hmm-tests.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
> index 203323967b50..a5ce7cc2e7aa 100644
> --- a/tools/testing/selftests/vm/hmm-tests.c
> +++ b/tools/testing/selftests/vm/hmm-tests.c
> @@ -1520,9 +1520,19 @@ TEST_F(hmm2, double_map)
>  	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
>  		ASSERT_EQ(ptr[i], i);
>
> -	/* Punch a hole after the first page address. */
> -	ret = munmap(buffer->ptr + self->page_size, self->page_size);
> +	/* Migrate pages to device 1 and try to read from device 0. */
> +	ret = hmm_dmirror_cmd(self->fd1, HMM_DMIRROR_MIGRATE, buffer, npages);
> +	ASSERT_EQ(ret, 0);
> +	ASSERT_EQ(buffer->cpages, npages);
> +
> +	ret = hmm_dmirror_cmd(self->fd0, HMM_DMIRROR_READ, buffer, npages);
>  	ASSERT_EQ(ret, 0);
> +	ASSERT_EQ(buffer->cpages, npages);
> +	ASSERT_EQ(buffer->faults, 1);
> +
> +	/* Check what device 0 read. */
> +	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
> +		ASSERT_EQ(ptr[i], i);
>
>  	hmm_buffer_free(buffer);
>  }



* Re: [PATCH v2 1/2] mm/hmm: fault non-owner device private entries
  2022-07-26  1:26   ` Alistair Popple
@ 2022-07-26 16:51     ` Ralph Campbell
  2022-07-26 19:06       ` Andrew Morton
  0 siblings, 1 reply; 10+ messages in thread
From: Ralph Campbell @ 2022-07-26 16:51 UTC (permalink / raw)
  To: Alistair Popple
  Cc: linux-mm, Felix Kuehling, Philip Yang, Jason Gunthorpe,
	Andrew Morton, stable


On 7/25/22 18:26, Alistair Popple wrote:
> Thanks Ralph, please add:
>
> Reviewed-by: Alistair Popple <apopple@nvidia.com>
>
> However I think the fixes tag is wrong, see below.
>
> Ralph Campbell <rcampbell@nvidia.com> writes:
>
>> If hmm_range_fault() is called with the HMM_PFN_REQ_FAULT flag and a
>> device private PTE is found, the hmm_range::dev_private_owner field is
>> used to determine whether the device private page should be faulted in.
>> However, if the device private page is not owned by the caller,
>> hmm_range_fault() returns an error instead of calling migrate_to_ram()
>> to fault in the page.
>>
>> Cc: stable@vger.kernel.org
>> Fixes: 76612d6ce4cc ("mm/hmm: reorganize how !pte_present is handled in hmm_vma_handle_pte()")
> This should be 08ddddda667b ("mm/hmm: check the device private page owner in hmm_range_fault()")

Looks better to me too.
I assume Andrew will update the tags.




* Re: [PATCH v2 1/2] mm/hmm: fault non-owner device private entries
  2022-07-26 16:51     ` Ralph Campbell
@ 2022-07-26 19:06       ` Andrew Morton
  0 siblings, 0 replies; 10+ messages in thread
From: Andrew Morton @ 2022-07-26 19:06 UTC (permalink / raw)
  To: Ralph Campbell
  Cc: Alistair Popple, linux-mm, Felix Kuehling, Philip Yang,
	Jason Gunthorpe, stable

On Tue, 26 Jul 2022 09:51:24 -0700 Ralph Campbell <rcampbell@nvidia.com> wrote:

> >> Cc: stable@vger.kernel.org
> >> Fixes: 76612d6ce4cc ("mm/hmm: reorganize how !pte_present is handled in hmm_vma_handle_pte()")
> > This should be 08ddddda667b ("mm/hmm: check the device private page owner in hmm_range_fault()")
> 
> Looks better to me too.
> I assume Andrew will update the tags.

Yes, I updated the patch.



* Re: [PATCH v2 1/2] mm/hmm: fault non-owner device private entries
  2022-07-25 18:36 ` [PATCH v2 1/2] " Ralph Campbell
  2022-07-26  1:26   ` Alistair Popple
@ 2022-07-26 20:59   ` John Hubbard
  1 sibling, 0 replies; 10+ messages in thread
From: John Hubbard @ 2022-07-26 20:59 UTC (permalink / raw)
  To: Ralph Campbell, linux-mm
  Cc: Felix Kuehling, Philip Yang, Alistair Popple, Jason Gunthorpe,
	Andrew Morton, stable

On 7/25/22 11:36, Ralph Campbell wrote:
> If hmm_range_fault() is called with the HMM_PFN_REQ_FAULT flag and a
> device private PTE is found, the hmm_range::dev_private_owner field is
> used to determine whether the device private page should be faulted in.
> However, if the device private page is not owned by the caller,
> hmm_range_fault() returns an error instead of calling migrate_to_ram()
> to fault in the page.

Hi Ralph,

Just for our future sanity when trying to read through the log,
it's best to describe the problem, and then describe the fix. The
text above makes it quite difficult to tell whether it refers to
the pre-patch or post-patch code.

Also, a higher-level description of what this enables is good to have.

thanks,
-- 
John Hubbard
NVIDIA
> 
> Cc: stable@vger.kernel.org
> Fixes: 76612d6ce4cc ("mm/hmm: reorganize how !pte_present is handled in hmm_vma_handle_pte()")
> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> Reported-by: Felix Kuehling <felix.kuehling@amd.com>
> ---
>   mm/hmm.c | 19 ++++++++-----------
>   1 file changed, 8 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 3fd3242c5e50..f2aa63b94d9b 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -212,14 +212,6 @@ int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
>   		unsigned long end, unsigned long hmm_pfns[], pmd_t pmd);
>   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>   
> -static inline bool hmm_is_device_private_entry(struct hmm_range *range,
> -		swp_entry_t entry)
> -{
> -	return is_device_private_entry(entry) &&
> -		pfn_swap_entry_to_page(entry)->pgmap->owner ==
> -		range->dev_private_owner;
> -}
> -
>   static inline unsigned long pte_to_hmm_pfn_flags(struct hmm_range *range,
>   						 pte_t pte)
>   {
> @@ -252,10 +244,12 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
>   		swp_entry_t entry = pte_to_swp_entry(pte);
>   
>   		/*
> -		 * Never fault in device private pages, but just report
> -		 * the PFN even if not present.
> +		 * Don't fault in device private pages owned by the caller,
> +		 * just report the PFN.
>   		 */
> -		if (hmm_is_device_private_entry(range, entry)) {
> +		if (is_device_private_entry(entry) &&
> +		    pfn_swap_entry_to_page(entry)->pgmap->owner ==
> +		    range->dev_private_owner) {
>   			cpu_flags = HMM_PFN_VALID;
>   			if (is_writable_device_private_entry(entry))
>   				cpu_flags |= HMM_PFN_WRITE;
> @@ -273,6 +267,9 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
>   		if (!non_swap_entry(entry))
>   			goto fault;
>   
> +		if (is_device_private_entry(entry))
> +			goto fault;
> +
>   		if (is_device_exclusive_entry(entry))
>   			goto fault;
>   




* Re: [PATCH v2 2/2] mm/hmm: add a test for cross device private faults
  2022-07-25 18:36 ` [PATCH v2 2/2] mm/hmm: add a test for cross device private faults Ralph Campbell
  2022-07-26  1:38   ` Alistair Popple
@ 2022-07-26 21:03   ` John Hubbard
  2022-07-26 21:14     ` Ralph Campbell
  1 sibling, 1 reply; 10+ messages in thread
From: John Hubbard @ 2022-07-26 21:03 UTC (permalink / raw)
  To: Ralph Campbell, linux-mm
  Cc: Felix Kuehling, Philip Yang, Alistair Popple, Jason Gunthorpe,
	Andrew Morton

On 7/25/22 11:36, Ralph Campbell wrote:
> Add a simple test case for when hmm_range_fault() is called with the
> HMM_PFN_REQ_FAULT flag and a device private PTE is found for a device
> other than the hmm_range::dev_private_owner. This should cause the
> page to be faulted back to system memory from the other device and the
> PFN returned in the output array.
> 
> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> ---
>   tools/testing/selftests/vm/hmm-tests.c | 14 ++++++++++++--
>   1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
> index 203323967b50..a5ce7cc2e7aa 100644
> --- a/tools/testing/selftests/vm/hmm-tests.c
> +++ b/tools/testing/selftests/vm/hmm-tests.c
> @@ -1520,9 +1520,19 @@ TEST_F(hmm2, double_map)
>   	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
>   		ASSERT_EQ(ptr[i], i);
>   
> -	/* Punch a hole after the first page address. */
> -	ret = munmap(buffer->ptr + self->page_size, self->page_size);

If this removal was intentional, then it should be mentioned in the
commit log.

> +	/* Migrate pages to device 1 and try to read from device 0. */
> +	ret = hmm_dmirror_cmd(self->fd1, HMM_DMIRROR_MIGRATE, buffer, npages);
> +	ASSERT_EQ(ret, 0);
> +	ASSERT_EQ(buffer->cpages, npages);
> +
> +	ret = hmm_dmirror_cmd(self->fd0, HMM_DMIRROR_READ, buffer, npages);
>   	ASSERT_EQ(ret, 0);
> +	ASSERT_EQ(buffer->cpages, npages);
> +	ASSERT_EQ(buffer->faults, 1);
> +
> +	/* Check what device 0 read. */
> +	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
> +		ASSERT_EQ(ptr[i], i);

I'm assuming that your testing shows that this fails without patch 1,
and succeeds with patch 1 applied? Apologies for such an obvious
question... :)

>   
>   	hmm_buffer_free(buffer);
>   }

thanks,
-- 
John Hubbard
NVIDIA



* Re: [PATCH v2 2/2] mm/hmm: add a test for cross device private faults
  2022-07-26 21:03   ` John Hubbard
@ 2022-07-26 21:14     ` Ralph Campbell
  0 siblings, 0 replies; 10+ messages in thread
From: Ralph Campbell @ 2022-07-26 21:14 UTC (permalink / raw)
  To: John Hubbard, linux-mm
  Cc: Felix Kuehling, Philip Yang, Alistair Popple, Jason Gunthorpe,
	Andrew Morton


On 7/26/22 14:03, John Hubbard wrote:
> On 7/25/22 11:36, Ralph Campbell wrote:
>> Add a simple test case for when hmm_range_fault() is called with the
>> HMM_PFN_REQ_FAULT flag and a device private PTE is found for a device
>> other than the hmm_range::dev_private_owner. This should cause the
>> page to be faulted back to system memory from the other device and the
>> PFN returned in the output array.
>>
>> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
>> ---
>>   tools/testing/selftests/vm/hmm-tests.c | 14 ++++++++++++--
>>   1 file changed, 12 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
>> index 203323967b50..a5ce7cc2e7aa 100644
>> --- a/tools/testing/selftests/vm/hmm-tests.c
>> +++ b/tools/testing/selftests/vm/hmm-tests.c
>> @@ -1520,9 +1520,19 @@ TEST_F(hmm2, double_map)
>>       for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
>>           ASSERT_EQ(ptr[i], i);
>>
>> -    /* Punch a hole after the first page address. */
>> -    ret = munmap(buffer->ptr + self->page_size, self->page_size);
>
> If this removal was intentional, then it should be mentioned in the
> commit log.

Yes. It does nothing, probably a copy & paste error.
I'll update the description and send a v3.

>
>> +    /* Migrate pages to device 1 and try to read from device 0. */
>> +    ret = hmm_dmirror_cmd(self->fd1, HMM_DMIRROR_MIGRATE, buffer, npages);
>> +    ASSERT_EQ(ret, 0);
>> +    ASSERT_EQ(buffer->cpages, npages);
>> +
>> +    ret = hmm_dmirror_cmd(self->fd0, HMM_DMIRROR_READ, buffer, npages);
>>       ASSERT_EQ(ret, 0);
>> +    ASSERT_EQ(buffer->cpages, npages);
>> +    ASSERT_EQ(buffer->faults, 1);
>> +
>> +    /* Check what device 0 read. */
>> +    for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
>> +        ASSERT_EQ(ptr[i], i);
>
> I'm assuming that your testing shows that this fails without patch 1,
> and succeeds with patch 1 applied? Apologies for such an obvious
> question... :)

Yes. Without the patch, the ASSERT_EQ(ret, 0) would trigger.
With the patch, ASSERT_EQ(buffer->faults, 1) verifies that the pages
were faulted in from device 1 when device 0 tries to read them.




end of thread

Thread overview: 10+ messages
2022-07-25 18:36 [PATCH v2 0/2] mm/hmm: fault non-owner device private entries Ralph Campbell
2022-07-25 18:36 ` [PATCH v2 1/2] " Ralph Campbell
2022-07-26  1:26   ` Alistair Popple
2022-07-26 16:51     ` Ralph Campbell
2022-07-26 19:06       ` Andrew Morton
2022-07-26 20:59   ` John Hubbard
2022-07-25 18:36 ` [PATCH v2 2/2] mm/hmm: add a test for cross device private faults Ralph Campbell
2022-07-26  1:38   ` Alistair Popple
2022-07-26 21:03   ` John Hubbard
2022-07-26 21:14     ` Ralph Campbell
