* [PATCH v1] mm/rmap: cleanup partially-mapped handling in __folio_remove_rmap()
@ 2024-06-18 15:10 David Hildenbrand
From: David Hildenbrand @ 2024-06-18 15:10 UTC (permalink / raw)
To: linux-kernel; +Cc: linux-mm, David Hildenbrand, Yosry Ahmed
Let's simplify and reduce code indentation. In the RMAP_LEVEL_PTE case, we
already check for nr when computing partially_mapped.
For RMAP_LEVEL_PMD, it's a bit more confusing. Likely, we don't need the
"nr" check, but we could have "nr < nr_pmdmapped" also if we stumbled
into the "/* Raced ahead of another remove and an add? */" case. So
let's simply move the nr check in there.
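To illustrate (a simplified sketch of the RMAP_LEVEL_PMD tail, not the
exact mm/rmap.c code; "mapped_ptes" is just a placeholder here):

        nr = nr_pmdmapped - mapped_ptes;
        /* Raced ahead of another remove and an add? */
        if (unlikely(nr < 0))
                nr = 0;
        /*
         * With nr clamped to 0, plain "nr < nr_pmdmapped" would still be
         * true, but the old outer "if (nr)" skipped the deferred split.
         * Checking nr in the expression itself preserves that behavior.
         */
        partially_mapped = nr && nr < nr_pmdmapped;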
Note that partially_mapped is always false for small folios.
No functional change intended.
Cc: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/rmap.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index afec4b6800caf..aa900e46cdf82 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1551,11 +1551,12 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
}
}
- partially_mapped = nr < nr_pmdmapped;
+ partially_mapped = nr && nr < nr_pmdmapped;
break;
}
- if (nr) {
+ if (partially_mapped && folio_test_anon(folio) &&
+ list_empty(&folio->_deferred_list))
/*
* Queue anon large folio for deferred split if at least one
* page of the folio is unmapped and at least one page
@@ -1563,10 +1564,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
*
* Check partially_mapped first to ensure it is a large folio.
*/
- if (folio_test_anon(folio) && partially_mapped &&
- list_empty(&folio->_deferred_list))
- deferred_split_folio(folio);
- }
+ deferred_split_folio(folio);
__folio_mod_stat(folio, -nr, -nr_pmdmapped);
/*
--
2.45.2
* Re: [PATCH v1] mm/rmap: cleanup partially-mapped handling in __folio_remove_rmap()
From: Zi Yan @ 2024-06-21 1:49 UTC (permalink / raw)
To: David Hildenbrand; +Cc: linux-kernel, linux-mm, Yosry Ahmed
On 18 Jun 2024, at 11:10, David Hildenbrand wrote:
> Let's simplify and reduce code indentation. In the RMAP_LEVEL_PTE case, we
> already check for nr when computing partially_mapped.
>
> For RMAP_LEVEL_PMD, it's a bit more confusing. Likely, we don't need the
> "nr" check, but we could have "nr < nr_pmdmapped" also if we stumbled
> into the "/* Raced ahead of another remove and an add? */" case. So
> let's simply move the nr check in there.
>
> Note that partially_mapped is always false for small folios.
>
> No functional change intended.
>
> Cc: Yosry Ahmed <yosryahmed@google.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> mm/rmap.c | 10 ++++------
> 1 file changed, 4 insertions(+), 6 deletions(-)
LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>
--
Best Regards,
Yan, Zi
* Re: [PATCH v1] mm/rmap: cleanup partially-mapped handling in __folio_remove_rmap()
From: Yosry Ahmed @ 2024-06-24 11:48 UTC (permalink / raw)
To: David Hildenbrand; +Cc: linux-kernel, linux-mm
On Tue, Jun 18, 2024 at 8:10 AM David Hildenbrand <david@redhat.com> wrote:
>
> Let's simplify and reduce code indentation. In the RMAP_LEVEL_PTE case, we
> already check for nr when computing partially_mapped.
>
> For RMAP_LEVEL_PMD, it's a bit more confusing. Likely, we don't need the
> "nr" check, but we could have "nr < nr_pmdmapped" also if we stumbled
> into the "/* Raced ahead of another remove and an add? */" case. So
> let's simply move the nr check in there.
>
> Note that partially_mapped is always false for small folios.
>
> No functional change intended.
>
> Cc: Yosry Ahmed <yosryahmed@google.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
Makes sense to me. FWIW:
Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
> ---
> mm/rmap.c | 10 ++++------
> 1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index afec4b6800caf..aa900e46cdf82 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1551,11 +1551,12 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
> }
> }
>
> - partially_mapped = nr < nr_pmdmapped;
> + partially_mapped = nr && nr < nr_pmdmapped;
> break;
> }
>
> - if (nr) {
> + if (partially_mapped && folio_test_anon(folio) &&
> + list_empty(&folio->_deferred_list))
> /*
> * Queue anon large folio for deferred split if at least one
> * page of the folio is unmapped and at least one page
> @@ -1563,10 +1564,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
> *
> * Check partially_mapped first to ensure it is a large folio.
> */
> - if (folio_test_anon(folio) && partially_mapped &&
> - list_empty(&folio->_deferred_list))
> - deferred_split_folio(folio);
> - }
> + deferred_split_folio(folio);
FWIW, I prefer moving the comment out of the one-line if block as you
originally suggested in [1].
[1] https://lore.kernel.org/lkml/1a408ed1-7e81-457e-a205-db274b4d6b78@redhat.com/
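Something like this, I assume (just a sketch of the placement, not the
actual resend):

        /*
         * Queue anon large folio for deferred split if at least one page
         * of the folio is unmapped and at least one page is still mapped.
         *
         * Check partially_mapped first to ensure it is a large folio.
         */
        if (partially_mapped && folio_test_anon(folio) &&
            list_empty(&folio->_deferred_list))
                deferred_split_folio(folio);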
> __folio_mod_stat(folio, -nr, -nr_pmdmapped);
>
> /*
> --
> 2.45.2
>
* Re: [PATCH v1] mm/rmap: cleanup partially-mapped handling in __folio_remove_rmap()
From: David Hildenbrand @ 2024-07-10 21:40 UTC (permalink / raw)
To: Yosry Ahmed; +Cc: linux-kernel, linux-mm
On 24.06.24 13:48, Yosry Ahmed wrote:
> On Tue, Jun 18, 2024 at 8:10 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> Let's simplify and reduce code indentation. In the RMAP_LEVEL_PTE case, we
>> already check for nr when computing partially_mapped.
>>
>> For RMAP_LEVEL_PMD, it's a bit more confusing. Likely, we don't need the
>> "nr" check, but we could have "nr < nr_pmdmapped" also if we stumbled
>> into the "/* Raced ahead of another remove and an add? */" case. So
>> let's simply move the nr check in there.
>>
>> Note that partially_mapped is always false for small folios.
>>
>> No functional change intended.
>>
>> Cc: Yosry Ahmed <yosryahmed@google.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> Makes sense to me. FWIW:
>
> Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
>
>> ---
>> mm/rmap.c | 10 ++++------
>> 1 file changed, 4 insertions(+), 6 deletions(-)
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index afec4b6800caf..aa900e46cdf82 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1551,11 +1551,12 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>> }
>> }
>>
>> - partially_mapped = nr < nr_pmdmapped;
>> + partially_mapped = nr && nr < nr_pmdmapped;
>> break;
>> }
>>
>> - if (nr) {
>> + if (partially_mapped && folio_test_anon(folio) &&
>> + list_empty(&folio->_deferred_list))
>> /*
>> * Queue anon large folio for deferred split if at least one
>> * page of the folio is unmapped and at least one page
>> @@ -1563,10 +1564,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>> *
>> * Check partially_mapped first to ensure it is a large folio.
>> */
>> - if (folio_test_anon(folio) && partially_mapped &&
>> - list_empty(&folio->_deferred_list))
>> - deferred_split_folio(folio);
>> - }
>> + deferred_split_folio(folio);
>
> FWIW, I prefer moving the comment out of the one-line if block as you
> originally suggested in [1].
>
> [1] https://lore.kernel.org/lkml/1a408ed1-7e81-457e-a205-db274b4d6b78@redhat.com/
Sorry for the late reply, crazy times here.
Let me do that and resend, thanks!
--
Cheers,
David / dhildenb