From: Qi Zheng <zhengqi.arch@bytedance.com>
To: Jann Horn <jannh@google.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	david@redhat.com, hughd@google.com, willy@infradead.org,
	mgorman@suse.de, muchun.song@linux.dev, vbabka@kernel.org,
	akpm@linux-foundation.org, zokeefe@google.com,
	rientjes@google.com, peterx@redhat.com, catalin.marinas@arm.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, x86@kernel.org
Subject: Re: [PATCH v2 6/7] x86: mm: free page table pages by RCU instead of semi RCU
Date: Fri, 8 Nov 2024 15:38:41 +0800
Message-ID: <e308363a-0c1e-421f-a35e-5bb750992554@bytedance.com>
In-Reply-To: <CAG48ez0XhKnr3uVODu+iihRi4XwLupy=YX8BHa==1Y-ZvrmKjg@mail.gmail.com>

Hi Jann,

On 2024/11/8 06:39, Jann Horn wrote:
> +x86 MM maintainers - x86@kernel.org was already cc'ed, but I don't
> know if that is enough for them to see it, and I haven't seen them
> comment on this series yet; I think you need an ack from them for this
> change.

Yes, thanks Jann for cc-ing the x86 MM maintainers; I look forward
to their feedback!

> 
> On Thu, Oct 31, 2024 at 9:14 AM Qi Zheng <zhengqi.arch@bytedance.com> wrote:
>> Now, if CONFIG_MMU_GATHER_RCU_TABLE_FREE is selected, the page table pages
>> will be freed by semi RCU, that is:
>>
>>   - batch table freeing: asynchronous free by RCU
>>   - single table freeing: IPI + synchronous free
>>
>> In this way, the page table can be traversed locklessly by disabling IRQs
>> in paths such as fast GUP. But this is not enough to free the empty PTE
>> page table pages in paths other than the munmap and exit_mmap paths,
>> because IPI cannot be synchronized with rcu_read_lock() in
>> pte_offset_map{_lock}().
>>
>> In preparation for supporting reclamation of empty PTE page table pages,
>> let single tables also be freed by RCU like batch table freeing. Then we
>> can also use pte_offset_map() etc. to prevent PTE pages from being freed.
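For context: pte_offset_map() takes rcu_read_lock() internally, so once
single tables are also freed via RCU, a lockless walker is protected for
as long as the PTE page stays mapped. A rough sketch of the pattern (the
helper name here is made up; this is not code from the series):

static bool pte_is_present_lockless(pmd_t *pmd, unsigned long addr)
{
	pte_t *pte;
	bool ret = false;

	/* pte_offset_map() enters an RCU read-side critical section */
	pte = pte_offset_map(pmd, addr);
	if (pte) {
		ret = pte_present(ptep_get(pte));
		pte_unmap(pte);	/* leaves the RCU read-side section */
	}
	return ret;
}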
> 
> I applied your series locally and followed the page table freeing path
> that the reclaim feature would use on x86-64. Looks like it goes like
> this with the series applied:

Yes.

> 
> free_pte
>    pte_free_tlb
>      __pte_free_tlb
>        ___pte_free_tlb
>          paravirt_tlb_remove_table
>            tlb_remove_table [!CONFIG_PARAVIRT, Xen PV, Hyper-V, KVM]
>              [no-free-memory slowpath:]
>                tlb_table_invalidate
>                tlb_remove_table_one
>                  tlb_remove_table_sync_one [does IPI for GUP-fast]

		   ^
		   It seems that this step can be omitted when
		   CONFIG_PT_RECLAIM is enabled, because GUP-fast
		   disables IRQs, which also serves as an RCU read-side
		   critical section.
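Roughly, something like the following (illustrative only, not a patch
from this series):

static void tlb_remove_table_one(void *table)
{
#ifndef CONFIG_PT_RECLAIM
	/* IPI to synchronize with GUP-fast walkers */
	tlb_remove_table_sync_one();
#endif
	/* with CONFIG_PT_RECLAIM the table is freed via RCU anyway */
	__tlb_remove_table_one(table);
}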

>                  __tlb_remove_table_one [frees via RCU]
>              [fastpath:]
>                tlb_table_flush
>                  tlb_remove_table_free [frees via RCU]
>            native_tlb_remove_table [CONFIG_PARAVIRT on native]
>              tlb_remove_table [see above]
> 
> Basically, the only remaining case in which
> paravirt_tlb_remove_table() does not use tlb_remove_table() with RCU
> delay is !CONFIG_PARAVIRT && !CONFIG_PT_RECLAIM. Given that
> CONFIG_PT_RECLAIM is defined as "default y" when supported, I guess
> that means X86's direct page table freeing path will almost never be
> used? If it stays that way and the X86 folks don't see a performance
> impact from using RCU to free tables on munmap() / process exit, I
> guess we might want to get rid of the direct page table freeing path
> on x86 at some point to simplify things...

In theory, using RCU to free PTE pages asynchronously should make the
munmap() / process exit paths faster. I can try to grab some data.
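
For example, something as simple as this userspace loop (just a sketch
of the kind of measurement I have in mind, nothing rigorous) should show
whether munmap() of a fully populated mapping gets faster:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define SZ	(512UL << 20)	/* 512 MiB, enough to populate many PTE pages */

int main(void)
{
	struct timespec t0, t1;
	char *p = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	memset(p, 1, SZ);	/* fault everything in so page tables exist */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	munmap(p, SZ);		/* page table freeing happens in here */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("munmap: %ld ns\n", (t1.tv_sec - t0.tv_sec) * 1000000000L +
	       (t1.tv_nsec - t0.tv_nsec));
	return 0;
}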

> 
> (That simplification might also help prepare for Intel Remote Action
> Request, if that is a thing people want...)

If so, even better!

> 
>> Like pte_free_defer(), we can also safely use ptdesc->pt_rcu_head to free
>> the page table pages:
>>
>>   - The pt_rcu_head is unioned with pt_list and pmd_huge_pte.
>>
>>   - For pt_list, it is used to manage the PGD page on x86. Fortunately,
>>     tlb_remove_table() is never used to free PGD pages, so it is safe
>>     to use pt_rcu_head.
>>
>>   - For pmd_huge_pte, we will do zap_deposited_table() before freeing the
>>     PMD page, so it is also safe.
> 
> Please also update the comments on "struct ptdesc" accordingly.

OK, will do.
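
For reference, the union in question currently looks roughly like this
(simplified excerpt of struct ptdesc, quoted from memory, so treat it as
illustrative rather than authoritative):

struct ptdesc {
	unsigned long __page_flags;

	union {
		struct rcu_head pt_rcu_head;	/* now also used by x86 tlb_remove_table() */
		struct list_head pt_list;	/* only used for PGD pages on x86 */
		struct {
			unsigned long _pt_pad_1;
			pgtable_t pmd_huge_pte;	/* cleared via zap_deposited_table() first */
		};
	};
	/* ... */
};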

> 
>> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
>> ---
>>   arch/x86/include/asm/tlb.h | 19 +++++++++++++++++++
>>   arch/x86/kernel/paravirt.c |  7 +++++++
>>   arch/x86/mm/pgtable.c      | 10 +++++++++-
>>   mm/mmu_gather.c            |  9 ++++++++-
>>   4 files changed, 43 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
>> index 580636cdc257b..e223b53a8b190 100644
>> --- a/arch/x86/include/asm/tlb.h
>> +++ b/arch/x86/include/asm/tlb.h
>> @@ -34,4 +34,23 @@ static inline void __tlb_remove_table(void *table)
>>          free_page_and_swap_cache(table);
>>   }
>>
>> +#ifdef CONFIG_PT_RECLAIM
>> +static inline void __tlb_remove_table_one_rcu(struct rcu_head *head)
>> +{
>> +       struct page *page;
>> +
>> +       page = container_of(head, struct page, rcu_head);
>> +       free_page_and_swap_cache(page);
>> +}
> 
> Why free_page_and_swap_cache()? Page tables shouldn't have swap cache,
> so I think something like put_page() would do the job.

Ah, I just did the same thing as __tlb_remove_table(). But I share
your doubt: why does __tlb_remove_table() call
free_page_and_swap_cache() instead of put_page()?
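
If put_page() is indeed sufficient, the RCU callback above would
presumably shrink to something like this (untested sketch):

static inline void __tlb_remove_table_one_rcu(struct rcu_head *head)
{
	struct page *page = container_of(head, struct page, rcu_head);

	/* page table pages never have swap cache attached */
	put_page(page);
}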

Thanks,
Qi

> 
>> +static inline void __tlb_remove_table_one(void *table)
>> +{
>> +       struct page *page;
>> +
>> +       page = table;
>> +       call_rcu(&page->rcu_head, __tlb_remove_table_one_rcu);
>> +}
>> +#define __tlb_remove_table_one __tlb_remove_table_one
>> +#endif /* CONFIG_PT_RECLAIM */
>> +
>>   #endif /* _ASM_X86_TLB_H */
>> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
>> index fec3815335558..89688921ea62e 100644
>> --- a/arch/x86/kernel/paravirt.c
>> +++ b/arch/x86/kernel/paravirt.c
>> @@ -59,10 +59,17 @@ void __init native_pv_lock_init(void)
>>                  static_branch_enable(&virt_spin_lock_key);
>>   }
>>
>> +#ifndef CONFIG_PT_RECLAIM
>>   static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
>>   {
>>          tlb_remove_page(tlb, table);
>>   }
>> +#else
>> +static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
>> +{
>> +       tlb_remove_table(tlb, table);
>> +}
>> +#endif
>>
>>   struct static_key paravirt_steal_enabled;
>>   struct static_key paravirt_steal_rq_enabled;
>> diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
>> index 5745a354a241c..69a357b15974a 100644
>> --- a/arch/x86/mm/pgtable.c
>> +++ b/arch/x86/mm/pgtable.c
>> @@ -19,12 +19,20 @@ EXPORT_SYMBOL(physical_mask);
>>   #endif
>>
>>   #ifndef CONFIG_PARAVIRT
>> +#ifndef CONFIG_PT_RECLAIM
>>   static inline
>>   void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
>>   {
>>          tlb_remove_page(tlb, table);
>>   }
>> -#endif
>> +#else
>> +static inline
>> +void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
>> +{
>> +       tlb_remove_table(tlb, table);
>> +}
>> +#endif /* !CONFIG_PT_RECLAIM */
>> +#endif /* !CONFIG_PARAVIRT */
>>
>>   gfp_t __userpte_alloc_gfp = GFP_PGTABLE_USER | PGTABLE_HIGHMEM;
>>
>> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
>> index 99b3e9408aa0f..d948479ca09e6 100644
>> --- a/mm/mmu_gather.c
>> +++ b/mm/mmu_gather.c
>> @@ -311,10 +311,17 @@ static inline void tlb_table_invalidate(struct mmu_gather *tlb)
>>          }
>>   }
>>
>> +#ifndef __tlb_remove_table_one
>> +static inline void __tlb_remove_table_one(void *table)
>> +{
>> +       __tlb_remove_table(table);
>> +}
>> +#endif
>> +
>>   static void tlb_remove_table_one(void *table)
>>   {
>>          tlb_remove_table_sync_one();
>> -       __tlb_remove_table(table);
>> +       __tlb_remove_table_one(table);
>>   }
>>
>>   static void tlb_table_flush(struct mmu_gather *tlb)
>> --
>> 2.20.1
>>

