* Re: [PATCH 1/2] mm: remove redundant check in page_vma_mapped_walk
From: Andrew Morton @ 2023-07-04 17:05 UTC (permalink / raw)
To: Kemeng Shi; +Cc: linux-mm, linux-kernel
On Wed, 5 Jul 2023 05:39:31 +0800 Kemeng Shi <shikemeng@huaweicloud.com> wrote:
> For the PVMW_SYNC case, we always take the pte lock when we get the
> first pte of a PTE-mapped THP in map_pte, and we hold it until:
> 1. the scan of the pmd range finishes, or
> 2. the scan of the user-supplied range finishes, or
> 3. the caller stops the walk with page_vma_mapped_walk_done.
> In each case, the pte lock is not released in the middle of scanning a
> PTE-mapped THP.
>
> ...
>
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -275,10 +275,6 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> goto restart;
> }
> pvmw->pte++;
> - if ((pvmw->flags & PVMW_SYNC) && !pvmw->ptl) {
> - pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
> - spin_lock(pvmw->ptl);
> - }
> } while (pte_none(*pvmw->pte));
>
> if (!pvmw->ptl) {
This code has changed significantly since 6.4. Please develop against
the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm, thanks.
* Re: [PATCH 1/2] mm: remove redundant check in page_vma_mapped_walk
From: Kemeng Shi @ 2023-07-06 2:37 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-mm, linux-kernel
On 7/5/2023 1:05 AM, Andrew Morton wrote:
> On Wed, 5 Jul 2023 05:39:31 +0800 Kemeng Shi <shikemeng@huaweicloud.com> wrote:
>
>> For the PVMW_SYNC case, we always take the pte lock when we get the
>> first pte of a PTE-mapped THP in map_pte, and we hold it until:
>> 1. the scan of the pmd range finishes, or
>> 2. the scan of the user-supplied range finishes, or
>> 3. the caller stops the walk with page_vma_mapped_walk_done.
>> In each case, the pte lock is not released in the middle of scanning a
>> PTE-mapped THP.
>>
>> ...
>>
>> --- a/mm/page_vma_mapped.c
>> +++ b/mm/page_vma_mapped.c
>> @@ -275,10 +275,6 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>> goto restart;
>> }
>> pvmw->pte++;
>> - if ((pvmw->flags & PVMW_SYNC) && !pvmw->ptl) {
>> - pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
>> - spin_lock(pvmw->ptl);
>> - }
>> } while (pte_none(*pvmw->pte));
>>
>> if (!pvmw->ptl) {
>
> This code has changed significantly since 6.4. Please develop against
> the mm-unstable branch at
> git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm, thanks.
>
>
Thanks for the reminder; I will rework my changes against the updated code.
--
Best wishes
Kemeng Shi