* Re: [PATCH] arm64, mm: avoid always making PTE dirty in pte_mkwrite()
       [not found] <20251015023712.46598-1-ying.huang@linux.alibaba.com>
@ 2025-10-17 18:06 ` Catalin Marinas
  2025-10-20  2:09   ` Anshuman Khandual
  2025-10-20 11:00   ` Huang, Ying
  0 siblings, 2 replies; 5+ messages in thread

From: Catalin Marinas @ 2025-10-17 18:06 UTC
To: Huang Ying
Cc: Will Deacon, Anshuman Khandual, Ryan Roberts, Gavin Shan,
    Ard Biesheuvel, Matthew Wilcox (Oracle), Yicong Yang,
    linux-arm-kernel, linux-kernel, linux-mm

On Wed, Oct 15, 2025 at 10:37:12AM +0800, Huang Ying wrote:
> Current pte_mkwrite_novma() makes PTE dirty unconditionally. This may
> mark some pages that are never written dirty wrongly. For example,
> do_swap_page() may map the exclusive pages with writable and clean PTEs
> if the VMA is writable and the page fault is for read access.
> However, current pte_mkwrite_novma() implementation always dirties the
> PTE. This may cause unnecessary disk writing if the pages are
> never written before being reclaimed.
>
> So, change pte_mkwrite_novma() to clear the PTE_RDONLY bit only if the
> PTE_DIRTY bit is set to make it possible to make the PTE writable and
> clean.
>
> The current behavior was introduced in commit 73e86cb03cf2 ("arm64:
> Move PTE_RDONLY bit handling out of set_pte_at()"). Before that,
> pte_mkwrite() only sets the PTE_WRITE bit, while set_pte_at() only
> clears the PTE_RDONLY bit if both the PTE_WRITE and the PTE_DIRTY bits
> are set.
>
> To test the performance impact of the patch, on an arm64 server
> machine, run 16 redis-server processes on socket 1 and 16
> memtier_benchmark processes on socket 0 with mostly get
> transactions (that is, redis-server will mostly read memory only).
> The memory footprint of redis-server is larger than the available
> memory, so swap out/in will be triggered. Test results show that the
> patch can avoid most swapping out because the pages are mostly clean.
> And the benchmark throughput improves ~23.9% in the test.
>
> Fixes: 73e86cb03cf2 ("arm64: Move PTE_RDONLY bit handling out of set_pte_at()")
> Signed-off-by: Huang Ying <ying.huang@linux.alibaba.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Gavin Shan <gshan@redhat.com>
> Cc: Ard Biesheuvel <ardb@kernel.org>
> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> Cc: Yicong Yang <yangyicong@hisilicon.com>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  arch/arm64/include/asm/pgtable.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index aa89c2e67ebc..0944e296dd4a 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -293,7 +293,8 @@ static inline pmd_t set_pmd_bit(pmd_t pmd, pgprot_t prot)
>  static inline pte_t pte_mkwrite_novma(pte_t pte)
>  {
>  	pte = set_pte_bit(pte, __pgprot(PTE_WRITE));
> -	pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
> +	if (pte_sw_dirty(pte))
> +		pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
>  	return pte;
>  }

This seems to be the right thing. I recall years ago I grep'ed
(obviously not hard enough) and most pte_mkwrite() places had a
pte_mkdirty(). But I missed do_swap_page() and possibly others.

For this patch:

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

I wonder whether we should also add (as a separate patch):

diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 830107b6dd08..df1c552ef11c 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -101,6 +101,7 @@ static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
 	WARN_ON(pte_dirty(pte_mkclean(pte_mkdirty(pte))));
 	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte, args->vma))));
 	WARN_ON(pte_dirty(pte_wrprotect(pte_mkclean(pte))));
+	WARN_ON(pte_dirty(pte_mkwrite_novma(pte_mkclean(pte))));
 	WARN_ON(!pte_dirty(pte_wrprotect(pte_mkdirty(pte))));
 }

For completeness, also (and maybe other combinations):

	WARN_ON(!pte_write(pte_mkdirty(pte_mkwrite_novma(pte))));

I cc'ed linux-mm in case we missed anything. If nothing raised, I'll
queue it next week.

Thanks.

-- 
Catalin
* Re: [PATCH] arm64, mm: avoid always making PTE dirty in pte_mkwrite()
  2025-10-17 18:06 ` [PATCH] arm64, mm: avoid always making PTE dirty in pte_mkwrite() Catalin Marinas
@ 2025-10-20  2:09 ` Anshuman Khandual
  2025-10-20 11:04   ` Huang, Ying
  2025-10-20 19:17   ` David Hildenbrand
  1 sibling, 2 replies; 5+ messages in thread

From: Anshuman Khandual @ 2025-10-20 2:09 UTC
To: Catalin Marinas, Huang Ying
Cc: Will Deacon, Ryan Roberts, Gavin Shan, Ard Biesheuvel,
    Matthew Wilcox (Oracle), Yicong Yang, linux-arm-kernel,
    linux-kernel, linux-mm

On 17/10/25 11:36 PM, Catalin Marinas wrote:
> On Wed, Oct 15, 2025 at 10:37:12AM +0800, Huang Ying wrote:
>> Current pte_mkwrite_novma() makes PTE dirty unconditionally. This may
>> mark some pages that are never written dirty wrongly. For example,
>> do_swap_page() may map the exclusive pages with writable and clean PTEs
>> if the VMA is writable and the page fault is for read access.
>> However, current pte_mkwrite_novma() implementation always dirties the
>> PTE. This may cause unnecessary disk writing if the pages are
>> never written before being reclaimed.
>>
>> So, change pte_mkwrite_novma() to clear the PTE_RDONLY bit only if the
>> PTE_DIRTY bit is set to make it possible to make the PTE writable and
>> clean.
>>
>> The current behavior was introduced in commit 73e86cb03cf2 ("arm64:
>> Move PTE_RDONLY bit handling out of set_pte_at()"). Before that,
>> pte_mkwrite() only sets the PTE_WRITE bit, while set_pte_at() only
>> clears the PTE_RDONLY bit if both the PTE_WRITE and the PTE_DIRTY bits
>> are set.
>>
>> To test the performance impact of the patch, on an arm64 server
>> machine, run 16 redis-server processes on socket 1 and 16
>> memtier_benchmark processes on socket 0 with mostly get
>> transactions (that is, redis-server will mostly read memory only).
>> The memory footprint of redis-server is larger than the available
>> memory, so swap out/in will be triggered. Test results show that the
>> patch can avoid most swapping out because the pages are mostly clean.
>> And the benchmark throughput improves ~23.9% in the test.
>>
>> Fixes: 73e86cb03cf2 ("arm64: Move PTE_RDONLY bit handling out of set_pte_at()")
>> Signed-off-by: Huang Ying <ying.huang@linux.alibaba.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Gavin Shan <gshan@redhat.com>
>> Cc: Ard Biesheuvel <ardb@kernel.org>
>> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>> Cc: Yicong Yang <yangyicong@hisilicon.com>
>> Cc: linux-arm-kernel@lists.infradead.org
>> Cc: linux-kernel@vger.kernel.org
>> ---
>>  arch/arm64/include/asm/pgtable.h | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index aa89c2e67ebc..0944e296dd4a 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -293,7 +293,8 @@ static inline pmd_t set_pmd_bit(pmd_t pmd, pgprot_t prot)
>>  static inline pte_t pte_mkwrite_novma(pte_t pte)
>>  {
>>  	pte = set_pte_bit(pte, __pgprot(PTE_WRITE));
>> -	pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
>> +	if (pte_sw_dirty(pte))
>> +		pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
>>  	return pte;
>>  }
>
> This seems to be the right thing. I recall years ago I grep'ed
> (obviously not hard enough) and most pte_mkwrite() places had a
> pte_mkdirty(). But I missed do_swap_page() and possibly others.
>
> For this patch:
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
>
> I wonder whether we should also add (as a separate patch):
>
> diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
> index 830107b6dd08..df1c552ef11c 100644
> --- a/mm/debug_vm_pgtable.c
> +++ b/mm/debug_vm_pgtable.c
> @@ -101,6 +101,7 @@ static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
>  	WARN_ON(pte_dirty(pte_mkclean(pte_mkdirty(pte))));
>  	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte, args->vma))));
>  	WARN_ON(pte_dirty(pte_wrprotect(pte_mkclean(pte))));
> +	WARN_ON(pte_dirty(pte_mkwrite_novma(pte_mkclean(pte))));
>  	WARN_ON(!pte_dirty(pte_wrprotect(pte_mkdirty(pte))));
>  }
>
> For completeness, also (and maybe other combinations):
>
> 	WARN_ON(!pte_write(pte_mkdirty(pte_mkwrite_novma(pte))));

Adding similar tests to pte_wrprotect().

diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 830107b6dd08..573632ebf304 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -102,6 +102,11 @@ static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
 	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte, args->vma))));
 	WARN_ON(pte_dirty(pte_wrprotect(pte_mkclean(pte))));
 	WARN_ON(!pte_dirty(pte_wrprotect(pte_mkdirty(pte))));
+
+	WARN_ON(pte_dirty(pte_mkwrite_novma(pte_mkclean(pte))));
+	WARN_ON(!pte_write(pte_mkdirty(pte_mkwrite_novma(pte))));
+	WARN_ON(!pte_write(pte_mkwrite_novma(pte_wrprotect(pte))));
+	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite_novma(pte))));
 }
 
 static void __init pte_advanced_tests(struct pgtable_debug_args *args)
@@ -195,6 +200,9 @@ static void __init pmd_basic_tests(struct pgtable_debug_args *args, int idx)
 	WARN_ON(pmd_write(pmd_wrprotect(pmd_mkwrite(pmd, args->vma))));
 	WARN_ON(pmd_dirty(pmd_wrprotect(pmd_mkclean(pmd))));
 	WARN_ON(!pmd_dirty(pmd_wrprotect(pmd_mkdirty(pmd))));
+
+	WARN_ON(!pmd_write(pmd_mkwrite_novma(pmd_wrprotect(pmd))));
+	WARN_ON(pmd_write(pmd_wrprotect(pmd_mkwrite_novma(pmd))));
 	/*
 	 * A huge page does not point to next level page table
 	 * entry. Hence this must qualify as pmd_bad().

> I cc'ed linux-mm in case we missed anything. If nothing raised, I'll
> queue it next week.
>
> Thanks.
* Re: [PATCH] arm64, mm: avoid always making PTE dirty in pte_mkwrite()
  2025-10-20  2:09 ` Anshuman Khandual
@ 2025-10-20 11:04   ` Huang, Ying
  0 siblings, 0 replies; 5+ messages in thread

From: Huang, Ying @ 2025-10-20 11:04 UTC
To: Anshuman Khandual
Cc: Catalin Marinas, Will Deacon, Ryan Roberts, Gavin Shan,
    Ard Biesheuvel, Matthew Wilcox (Oracle), Yicong Yang,
    linux-arm-kernel, linux-kernel, linux-mm

Hi, Anshuman,

Anshuman Khandual <anshuman.khandual@arm.com> writes:

> On 17/10/25 11:36 PM, Catalin Marinas wrote:
>> On Wed, Oct 15, 2025 at 10:37:12AM +0800, Huang Ying wrote:
>>> Current pte_mkwrite_novma() makes PTE dirty unconditionally. This may
>>> mark some pages that are never written dirty wrongly. For example,
>>> do_swap_page() may map the exclusive pages with writable and clean PTEs
>>> if the VMA is writable and the page fault is for read access.
>>> However, current pte_mkwrite_novma() implementation always dirties the
>>> PTE. This may cause unnecessary disk writing if the pages are
>>> never written before being reclaimed.
>>>
>>> So, change pte_mkwrite_novma() to clear the PTE_RDONLY bit only if the
>>> PTE_DIRTY bit is set to make it possible to make the PTE writable and
>>> clean.
>>>
>>> The current behavior was introduced in commit 73e86cb03cf2 ("arm64:
>>> Move PTE_RDONLY bit handling out of set_pte_at()"). Before that,
>>> pte_mkwrite() only sets the PTE_WRITE bit, while set_pte_at() only
>>> clears the PTE_RDONLY bit if both the PTE_WRITE and the PTE_DIRTY bits
>>> are set.
>>>
>>> To test the performance impact of the patch, on an arm64 server
>>> machine, run 16 redis-server processes on socket 1 and 16
>>> memtier_benchmark processes on socket 0 with mostly get
>>> transactions (that is, redis-server will mostly read memory only).
>>> The memory footprint of redis-server is larger than the available
>>> memory, so swap out/in will be triggered. Test results show that the
>>> patch can avoid most swapping out because the pages are mostly clean.
>>> And the benchmark throughput improves ~23.9% in the test.
>>>
>>> Fixes: 73e86cb03cf2 ("arm64: Move PTE_RDONLY bit handling out of set_pte_at()")
>>> Signed-off-by: Huang Ying <ying.huang@linux.alibaba.com>
>>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>>> Cc: Will Deacon <will@kernel.org>
>>> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>> Cc: Gavin Shan <gshan@redhat.com>
>>> Cc: Ard Biesheuvel <ardb@kernel.org>
>>> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>>> Cc: Yicong Yang <yangyicong@hisilicon.com>
>>> Cc: linux-arm-kernel@lists.infradead.org
>>> Cc: linux-kernel@vger.kernel.org
>>> ---
>>>  arch/arm64/include/asm/pgtable.h | 3 ++-
>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>> index aa89c2e67ebc..0944e296dd4a 100644
>>> --- a/arch/arm64/include/asm/pgtable.h
>>> +++ b/arch/arm64/include/asm/pgtable.h
>>> @@ -293,7 +293,8 @@ static inline pmd_t set_pmd_bit(pmd_t pmd, pgprot_t prot)
>>>  static inline pte_t pte_mkwrite_novma(pte_t pte)
>>>  {
>>>  	pte = set_pte_bit(pte, __pgprot(PTE_WRITE));
>>> -	pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
>>> +	if (pte_sw_dirty(pte))
>>> +		pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
>>>  	return pte;
>>>  }
>>
>> This seems to be the right thing. I recall years ago I grep'ed
>> (obviously not hard enough) and most pte_mkwrite() places had a
>> pte_mkdirty(). But I missed do_swap_page() and possibly others.
>>
>> For this patch:
>>
>> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
>>
>> I wonder whether we should also add (as a separate patch):
>>
>> diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
>> index 830107b6dd08..df1c552ef11c 100644
>> --- a/mm/debug_vm_pgtable.c
>> +++ b/mm/debug_vm_pgtable.c
>> @@ -101,6 +101,7 @@ static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
>>  	WARN_ON(pte_dirty(pte_mkclean(pte_mkdirty(pte))));
>>  	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte, args->vma))));
>>  	WARN_ON(pte_dirty(pte_wrprotect(pte_mkclean(pte))));
>> +	WARN_ON(pte_dirty(pte_mkwrite_novma(pte_mkclean(pte))));
>>  	WARN_ON(!pte_dirty(pte_wrprotect(pte_mkdirty(pte))));
>>  }
>>
>> For completeness, also (and maybe other combinations):
>>
>> 	WARN_ON(!pte_write(pte_mkdirty(pte_mkwrite_novma(pte))));
>
> Adding similar tests to pte_wrprotect().
>
> diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
> index 830107b6dd08..573632ebf304 100644
> --- a/mm/debug_vm_pgtable.c
> +++ b/mm/debug_vm_pgtable.c
> @@ -102,6 +102,11 @@ static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
>  	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte, args->vma))));
>  	WARN_ON(pte_dirty(pte_wrprotect(pte_mkclean(pte))));
>  	WARN_ON(!pte_dirty(pte_wrprotect(pte_mkdirty(pte))));
> +
> +	WARN_ON(pte_dirty(pte_mkwrite_novma(pte_mkclean(pte))));
> +	WARN_ON(!pte_write(pte_mkdirty(pte_mkwrite_novma(pte))));
> +	WARN_ON(!pte_write(pte_mkwrite_novma(pte_wrprotect(pte))));
> +	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite_novma(pte))));
>  }
>
>  static void __init pte_advanced_tests(struct pgtable_debug_args *args)
> @@ -195,6 +200,9 @@ static void __init pmd_basic_tests(struct pgtable_debug_args *args, int idx)
>  	WARN_ON(pmd_write(pmd_wrprotect(pmd_mkwrite(pmd, args->vma))));
>  	WARN_ON(pmd_dirty(pmd_wrprotect(pmd_mkclean(pmd))));
>  	WARN_ON(!pmd_dirty(pmd_wrprotect(pmd_mkdirty(pmd))));
> +
> +	WARN_ON(!pmd_write(pmd_mkwrite_novma(pmd_wrprotect(pmd))));
> +	WARN_ON(pmd_write(pmd_wrprotect(pmd_mkwrite_novma(pmd))));
>  	/*
>  	 * A huge page does not point to next level page table
>  	 * entry. Hence this must qualify as pmd_bad().

Thanks! I can add a patch for these tests. Or, do you want to work on
it?

>> I cc'ed linux-mm in case we missed anything. If nothing raised, I'll
>> queue it next week.

---
Best Regards,
Huang, Ying
* Re: [PATCH] arm64, mm: avoid always making PTE dirty in pte_mkwrite()
  2025-10-20  2:09 ` Anshuman Khandual
  2025-10-20 11:04   ` Huang, Ying
@ 2025-10-20 19:17   ` David Hildenbrand
  1 sibling, 0 replies; 5+ messages in thread

From: David Hildenbrand @ 2025-10-20 19:17 UTC
To: Anshuman Khandual, Catalin Marinas, Huang Ying
Cc: Will Deacon, Ryan Roberts, Gavin Shan, Ard Biesheuvel,
    Matthew Wilcox (Oracle), Yicong Yang, linux-arm-kernel,
    linux-kernel, linux-mm

On 20.10.25 04:09, Anshuman Khandual wrote:
> On 17/10/25 11:36 PM, Catalin Marinas wrote:
>> On Wed, Oct 15, 2025 at 10:37:12AM +0800, Huang Ying wrote:
>>> Current pte_mkwrite_novma() makes PTE dirty unconditionally. This may
>>> mark some pages that are never written dirty wrongly. For example,
>>> do_swap_page() may map the exclusive pages with writable and clean PTEs
>>> if the VMA is writable and the page fault is for read access.
>>> However, current pte_mkwrite_novma() implementation always dirties the
>>> PTE. This may cause unnecessary disk writing if the pages are
>>> never written before being reclaimed.
>>>
>>> So, change pte_mkwrite_novma() to clear the PTE_RDONLY bit only if the
>>> PTE_DIRTY bit is set to make it possible to make the PTE writable and
>>> clean.
>>>
>>> The current behavior was introduced in commit 73e86cb03cf2 ("arm64:
>>> Move PTE_RDONLY bit handling out of set_pte_at()"). Before that,
>>> pte_mkwrite() only sets the PTE_WRITE bit, while set_pte_at() only
>>> clears the PTE_RDONLY bit if both the PTE_WRITE and the PTE_DIRTY bits
>>> are set.
>>>
>>> To test the performance impact of the patch, on an arm64 server
>>> machine, run 16 redis-server processes on socket 1 and 16
>>> memtier_benchmark processes on socket 0 with mostly get
>>> transactions (that is, redis-server will mostly read memory only).
>>> The memory footprint of redis-server is larger than the available
>>> memory, so swap out/in will be triggered. Test results show that the
>>> patch can avoid most swapping out because the pages are mostly clean.
>>> And the benchmark throughput improves ~23.9% in the test.
>>>
>>> Fixes: 73e86cb03cf2 ("arm64: Move PTE_RDONLY bit handling out of set_pte_at()")
>>> Signed-off-by: Huang Ying <ying.huang@linux.alibaba.com>
>>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>>> Cc: Will Deacon <will@kernel.org>
>>> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
>>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>>> Cc: Gavin Shan <gshan@redhat.com>
>>> Cc: Ard Biesheuvel <ardb@kernel.org>
>>> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>>> Cc: Yicong Yang <yangyicong@hisilicon.com>
>>> Cc: linux-arm-kernel@lists.infradead.org
>>> Cc: linux-kernel@vger.kernel.org
>>> ---
>>>  arch/arm64/include/asm/pgtable.h | 3 ++-
>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>> index aa89c2e67ebc..0944e296dd4a 100644
>>> --- a/arch/arm64/include/asm/pgtable.h
>>> +++ b/arch/arm64/include/asm/pgtable.h
>>> @@ -293,7 +293,8 @@ static inline pmd_t set_pmd_bit(pmd_t pmd, pgprot_t prot)
>>>  static inline pte_t pte_mkwrite_novma(pte_t pte)
>>>  {
>>>  	pte = set_pte_bit(pte, __pgprot(PTE_WRITE));
>>> -	pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
>>> +	if (pte_sw_dirty(pte))
>>> +		pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
>>>  	return pte;
>>>  }
>>
>> This seems to be the right thing. I recall years ago I grep'ed
>> (obviously not hard enough) and most pte_mkwrite() places had a
>> pte_mkdirty(). But I missed do_swap_page() and possibly others.
>>
>> For this patch:
>>
>> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
>>
>> I wonder whether we should also add (as a separate patch):
>>
>> diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
>> index 830107b6dd08..df1c552ef11c 100644
>> --- a/mm/debug_vm_pgtable.c
>> +++ b/mm/debug_vm_pgtable.c
>> @@ -101,6 +101,7 @@ static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
>>  	WARN_ON(pte_dirty(pte_mkclean(pte_mkdirty(pte))));
>>  	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte, args->vma))));
>>  	WARN_ON(pte_dirty(pte_wrprotect(pte_mkclean(pte))));
>> +	WARN_ON(pte_dirty(pte_mkwrite_novma(pte_mkclean(pte))));
>>  	WARN_ON(!pte_dirty(pte_wrprotect(pte_mkdirty(pte))));
>>  }
>>
>> For completeness, also (and maybe other combinations):
>>
>> 	WARN_ON(!pte_write(pte_mkdirty(pte_mkwrite_novma(pte))));
>
> Adding similar tests to pte_wrprotect().
>
> diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
> index 830107b6dd08..573632ebf304 100644
> --- a/mm/debug_vm_pgtable.c
> +++ b/mm/debug_vm_pgtable.c
> @@ -102,6 +102,11 @@ static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
>  	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte, args->vma))));
>  	WARN_ON(pte_dirty(pte_wrprotect(pte_mkclean(pte))));
>  	WARN_ON(!pte_dirty(pte_wrprotect(pte_mkdirty(pte))));
> +
> +	WARN_ON(pte_dirty(pte_mkwrite_novma(pte_mkclean(pte))));
> +	WARN_ON(!pte_write(pte_mkdirty(pte_mkwrite_novma(pte))));
> +	WARN_ON(!pte_write(pte_mkwrite_novma(pte_wrprotect(pte))));
> +	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite_novma(pte))));
>  }
>
>  static void __init pte_advanced_tests(struct pgtable_debug_args *args)
> @@ -195,6 +200,9 @@ static void __init pmd_basic_tests(struct pgtable_debug_args *args, int idx)
>  	WARN_ON(pmd_write(pmd_wrprotect(pmd_mkwrite(pmd, args->vma))));
>  	WARN_ON(pmd_dirty(pmd_wrprotect(pmd_mkclean(pmd))));
>  	WARN_ON(!pmd_dirty(pmd_wrprotect(pmd_mkdirty(pmd))));
> +
> +	WARN_ON(!pmd_write(pmd_mkwrite_novma(pmd_wrprotect(pmd))));
> +	WARN_ON(pmd_write(pmd_wrprotect(pmd_mkwrite_novma(pmd))));
>  	/*
>  	 * A huge page does not point to next level page table
>  	 * entry. Hence this must qualify as pmd_bad().
>>
>> I cc'ed linux-mm in case we missed anything. If nothing raised, I'll
>> queue it next week.

Can we please send patches touching mm/debug_vm_pgtable.c properly to
linux-mm?

I wrote tools/testing/selftests/mm/mkdirty.c a while ago. I wonder
whether that could also be expressed in these tests here.

But my memory comes back: ARCH_HAS_DEBUG_VM_PGTABLE is not set by all
architectures (and in particular not sparc64, which I fixed back then).

-- 
Cheers

David / dhildenb
* Re: [PATCH] arm64, mm: avoid always making PTE dirty in pte_mkwrite()
  2025-10-17 18:06 ` [PATCH] arm64, mm: avoid always making PTE dirty in pte_mkwrite() Catalin Marinas
  2025-10-20  2:09   ` Anshuman Khandual
@ 2025-10-20 11:00   ` Huang, Ying
  1 sibling, 0 replies; 5+ messages in thread

From: Huang, Ying @ 2025-10-20 11:00 UTC
To: Catalin Marinas
Cc: Will Deacon, Anshuman Khandual, Ryan Roberts, Gavin Shan,
    Ard Biesheuvel, Matthew Wilcox (Oracle), Yicong Yang,
    linux-arm-kernel, linux-kernel, linux-mm

Hi, Catalin,

Catalin Marinas <catalin.marinas@arm.com> writes:

> On Wed, Oct 15, 2025 at 10:37:12AM +0800, Huang Ying wrote:
>> Current pte_mkwrite_novma() makes PTE dirty unconditionally. This may
>> mark some pages that are never written dirty wrongly. For example,
>> do_swap_page() may map the exclusive pages with writable and clean PTEs
>> if the VMA is writable and the page fault is for read access.
>> However, current pte_mkwrite_novma() implementation always dirties the
>> PTE. This may cause unnecessary disk writing if the pages are
>> never written before being reclaimed.
>>
>> So, change pte_mkwrite_novma() to clear the PTE_RDONLY bit only if the
>> PTE_DIRTY bit is set to make it possible to make the PTE writable and
>> clean.
>>
>> The current behavior was introduced in commit 73e86cb03cf2 ("arm64:
>> Move PTE_RDONLY bit handling out of set_pte_at()"). Before that,
>> pte_mkwrite() only sets the PTE_WRITE bit, while set_pte_at() only
>> clears the PTE_RDONLY bit if both the PTE_WRITE and the PTE_DIRTY bits
>> are set.
>>
>> To test the performance impact of the patch, on an arm64 server
>> machine, run 16 redis-server processes on socket 1 and 16
>> memtier_benchmark processes on socket 0 with mostly get
>> transactions (that is, redis-server will mostly read memory only).
>> The memory footprint of redis-server is larger than the available
>> memory, so swap out/in will be triggered. Test results show that the
>> patch can avoid most swapping out because the pages are mostly clean.
>> And the benchmark throughput improves ~23.9% in the test.
>>
>> Fixes: 73e86cb03cf2 ("arm64: Move PTE_RDONLY bit handling out of set_pte_at()")
>> Signed-off-by: Huang Ying <ying.huang@linux.alibaba.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Gavin Shan <gshan@redhat.com>
>> Cc: Ard Biesheuvel <ardb@kernel.org>
>> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>> Cc: Yicong Yang <yangyicong@hisilicon.com>
>> Cc: linux-arm-kernel@lists.infradead.org
>> Cc: linux-kernel@vger.kernel.org
>> ---
>>  arch/arm64/include/asm/pgtable.h | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index aa89c2e67ebc..0944e296dd4a 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -293,7 +293,8 @@ static inline pmd_t set_pmd_bit(pmd_t pmd, pgprot_t prot)
>>  static inline pte_t pte_mkwrite_novma(pte_t pte)
>>  {
>>  	pte = set_pte_bit(pte, __pgprot(PTE_WRITE));
>> -	pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
>> +	if (pte_sw_dirty(pte))
>> +		pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
>>  	return pte;
>>  }
>
> This seems to be the right thing. I recall years ago I grep'ed
> (obviously not hard enough) and most pte_mkwrite() places had a
> pte_mkdirty(). But I missed do_swap_page() and possibly others.

The do_swap_page() change was introduced in June 2024, quite recently.

> For this patch:
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

Thanks!

> I wonder whether we should also add (as a separate patch):
>
> diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
> index 830107b6dd08..df1c552ef11c 100644
> --- a/mm/debug_vm_pgtable.c
> +++ b/mm/debug_vm_pgtable.c
> @@ -101,6 +101,7 @@ static void __init pte_basic_tests(struct pgtable_debug_args *args, int idx)
>  	WARN_ON(pte_dirty(pte_mkclean(pte_mkdirty(pte))));
>  	WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte, args->vma))));
>  	WARN_ON(pte_dirty(pte_wrprotect(pte_mkclean(pte))));
> +	WARN_ON(pte_dirty(pte_mkwrite_novma(pte_mkclean(pte))));
>  	WARN_ON(!pte_dirty(pte_wrprotect(pte_mkdirty(pte))));
>  }
>
> For completeness, also (and maybe other combinations):
>
> 	WARN_ON(!pte_write(pte_mkdirty(pte_mkwrite_novma(pte))));

Sure. Will add another patch for this.

> I cc'ed linux-mm in case we missed anything. If nothing raised, I'll
> queue it next week.

---
Best Regards,
Huang, Ying
end of thread, other threads:[~2025-10-20 19:17 UTC | newest]
Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <20251015023712.46598-1-ying.huang@linux.alibaba.com>
2025-10-17 18:06 ` [PATCH] arm64, mm: avoid always making PTE dirty in pte_mkwrite() Catalin Marinas
2025-10-20  2:09   ` Anshuman Khandual
2025-10-20 11:04     ` Huang, Ying
2025-10-20 19:17     ` David Hildenbrand
2025-10-20 11:00   ` Huang, Ying