From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 29 Jul 2024 15:56:50 +0800
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 1/3] mm: turn USE_SPLIT_PTE_PTLOCKS / USE_SPLIT_PMD_PTLOCKS into Kconfig options
Content-Language: en-US
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org, x86@kernel.org,
 linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org,
 linux-fsdevel@vger.kernel.org, Andrew Morton, Oscar Salvador, Peter Xu,
 Muchun Song, Russell King, Michael Ellerman, Nicholas Piggin,
 Christophe Leroy, "Naveen N. Rao", Juergen Gross, Boris Ostrovsky,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 "H. Peter Anvin", Alexander Viro, Christian Brauner
References: <20240726150728.3159964-1-david@redhat.com> <20240726150728.3159964-2-david@redhat.com>
In-Reply-To: <20240726150728.3159964-2-david@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 2024/7/26 23:07, David Hildenbrand wrote:
> Let's clean that up a bit and prepare for depending on
> CONFIG_SPLIT_PMD_PTLOCKS in other Kconfig options.
> 
> More cleanups would be reasonable (like the arch-specific "depends on"
> for CONFIG_SPLIT_PTE_PTLOCKS), but we'll leave that for another day.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  arch/arm/mm/fault-armv.c      |  6 +++---
>  arch/x86/xen/mmu_pv.c         |  7 ++++---
>  include/linux/mm.h            |  8 ++++----
>  include/linux/mm_types.h      |  2 +-
>  include/linux/mm_types_task.h |  3 ---
>  kernel/fork.c                 |  4 ++--
>  mm/Kconfig                    | 18 +++++++++++-------
>  mm/memory.c                   |  2 +-
>  8 files changed, 26 insertions(+), 24 deletions(-)

That's great. Thanks!

Reviewed-by: Qi Zheng <zhengqi.arch@bytedance.com>

> 
> diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c
> index 2286c2ea60ec4..831793cd6ff94 100644
> --- a/arch/arm/mm/fault-armv.c
> +++ b/arch/arm/mm/fault-armv.c
> @@ -61,7 +61,7 @@ static int do_adjust_pte(struct vm_area_struct *vma, unsigned long address,
>  	return ret;
>  }
>  
> -#if USE_SPLIT_PTE_PTLOCKS
> +#if defined(CONFIG_SPLIT_PTE_PTLOCKS)
>  /*
>   * If we are using split PTE locks, then we need to take the page
>   * lock here.  Otherwise we are using shared mm->page_table_lock
> @@ -80,10 +80,10 @@ static inline void do_pte_unlock(spinlock_t *ptl)
>  {
>  	spin_unlock(ptl);
>  }
> -#else /* !USE_SPLIT_PTE_PTLOCKS */
> +#else /* !defined(CONFIG_SPLIT_PTE_PTLOCKS) */
>  static inline void do_pte_lock(spinlock_t *ptl) {}
>  static inline void do_pte_unlock(spinlock_t *ptl) {}
> -#endif /* USE_SPLIT_PTE_PTLOCKS */
> +#endif /* defined(CONFIG_SPLIT_PTE_PTLOCKS) */
>  
>  static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
>  	unsigned long pfn)
> diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
> index f1ce39d6d32cb..f4a316894bbb4 100644
> --- a/arch/x86/xen/mmu_pv.c
> +++ b/arch/x86/xen/mmu_pv.c
> @@ -665,7 +665,7 @@ static spinlock_t *xen_pte_lock(struct page *page, struct mm_struct *mm)
>  {
>  	spinlock_t *ptl = NULL;
>  
> -#if USE_SPLIT_PTE_PTLOCKS
> +#if defined(CONFIG_SPLIT_PTE_PTLOCKS)
>  	ptl = ptlock_ptr(page_ptdesc(page));
>  	spin_lock_nest_lock(ptl, &mm->page_table_lock);
>  #endif
> @@ -1553,7 +1553,8 @@ static inline void xen_alloc_ptpage(struct mm_struct *mm, unsigned long pfn,
>  
>  		__set_pfn_prot(pfn, PAGE_KERNEL_RO);
>  
> -		if (level == PT_PTE && USE_SPLIT_PTE_PTLOCKS && !pinned)
> +		if (level == PT_PTE && IS_ENABLED(CONFIG_SPLIT_PTE_PTLOCKS) &&
> +		    !pinned)
>  			__pin_pagetable_pfn(MMUEXT_PIN_L1_TABLE, pfn);
>  
>  		xen_mc_issue(XEN_LAZY_MMU);
> @@ -1581,7 +1582,7 @@ static inline void xen_release_ptpage(unsigned long pfn, unsigned level)
>  	if (pinned) {
>  		xen_mc_batch();
>  
> -		if (level == PT_PTE && USE_SPLIT_PTE_PTLOCKS)
> +		if (level == PT_PTE && IS_ENABLED(CONFIG_SPLIT_PTE_PTLOCKS))
>  			__pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, pfn);
>  
>  		__set_pfn_prot(pfn, PAGE_KERNEL);
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 0472a5090b180..dff43101572ec 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2843,7 +2843,7 @@ static inline void pagetable_free(struct ptdesc *pt)
>  	__free_pages(page, compound_order(page));
>  }
>  
> -#if USE_SPLIT_PTE_PTLOCKS
> +#if defined(CONFIG_SPLIT_PTE_PTLOCKS)
>  #if ALLOC_SPLIT_PTLOCKS
>  void __init ptlock_cache_init(void);
>  bool ptlock_alloc(struct ptdesc *ptdesc);
> @@ -2895,7 +2895,7 @@ static inline bool ptlock_init(struct ptdesc *ptdesc)
>  	return true;
>  }
>  
> -#else /* !USE_SPLIT_PTE_PTLOCKS */
> +#else /* !defined(CONFIG_SPLIT_PTE_PTLOCKS) */
>  /*
>   * We use mm->page_table_lock to guard all pagetable pages of the mm.
>   */
> @@ -2906,7 +2906,7 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pte_t *pte)
>  static inline void ptlock_cache_init(void) {}
>  static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; }
>  static inline void ptlock_free(struct ptdesc *ptdesc) {}
> -#endif /* USE_SPLIT_PTE_PTLOCKS */
> +#endif /* defined(CONFIG_SPLIT_PTE_PTLOCKS) */
>  
>  static inline bool pagetable_pte_ctor(struct ptdesc *ptdesc)
>  {
> @@ -2966,7 +2966,7 @@ pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
>  	((unlikely(pmd_none(*(pmd))) && __pte_alloc_kernel(pmd))? \
>  		NULL: pte_offset_kernel(pmd, address))
>  
> -#if USE_SPLIT_PMD_PTLOCKS
> +#if defined(CONFIG_SPLIT_PMD_PTLOCKS)
>  
>  static inline struct page *pmd_pgtable_page(pmd_t *pmd)
>  {
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 4854249792545..165c58b12ccc9 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -947,7 +947,7 @@ struct mm_struct {
>  #ifdef CONFIG_MMU_NOTIFIER
>  		struct mmu_notifier_subscriptions *notifier_subscriptions;
>  #endif
> -#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !USE_SPLIT_PMD_PTLOCKS
> +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !defined(CONFIG_SPLIT_PMD_PTLOCKS)
>  		pgtable_t pmd_huge_pte; /* protected by page_table_lock */
>  #endif
>  #ifdef CONFIG_NUMA_BALANCING
> diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
> index a2f6179b672b8..bff5706b76e14 100644
> --- a/include/linux/mm_types_task.h
> +++ b/include/linux/mm_types_task.h
> @@ -16,9 +16,6 @@
>  #include
>  #endif
>  
> -#define USE_SPLIT_PTE_PTLOCKS	(NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS)
> -#define USE_SPLIT_PMD_PTLOCKS	(USE_SPLIT_PTE_PTLOCKS && \
> -				 IS_ENABLED(CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK))
>  #define ALLOC_SPLIT_PTLOCKS	(SPINLOCK_SIZE > BITS_PER_LONG/8)
>  
>  /*
> diff --git a/kernel/fork.c b/kernel/fork.c
> index a8362c26ebcb0..216ce9ba4f4e6 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -832,7 +832,7 @@ static void check_mm(struct mm_struct *mm)
>  		pr_alert("BUG: non-zero pgtables_bytes on freeing mm: %ld\n",
>  				mm_pgtables_bytes(mm));
>  
> -#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !USE_SPLIT_PMD_PTLOCKS
> +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !defined(CONFIG_SPLIT_PMD_PTLOCKS)
>  	VM_BUG_ON_MM(mm->pmd_huge_pte, mm);
>  #endif
>  }
> @@ -1276,7 +1276,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
>  	RCU_INIT_POINTER(mm->exe_file, NULL);
>  	mmu_notifier_subscriptions_init(mm);
>  	init_tlb_flush_pending(mm);
> -#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !USE_SPLIT_PMD_PTLOCKS
> +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !defined(CONFIG_SPLIT_PMD_PTLOCKS)
>  	mm->pmd_huge_pte = NULL;
>  #endif
>  	mm_init_uprobes_state(mm);
> diff --git a/mm/Kconfig b/mm/Kconfig
> index b72e7d040f789..7b716ac802726 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -585,17 +585,21 @@ config ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE
>  # at the same time (e.g. copy_page_range()).
>  # DEBUG_SPINLOCK and DEBUG_LOCK_ALLOC spinlock_t also enlarge struct page.
>  #
> -config SPLIT_PTLOCK_CPUS
> -	int
> -	default "999999" if !MMU
> -	default "999999" if ARM && !CPU_CACHE_VIPT
> -	default "999999" if PARISC && !PA20
> -	default "999999" if SPARC32
> -	default "4"
> +config SPLIT_PTE_PTLOCKS
> +	def_bool y
> +	depends on MMU
> +	depends on NR_CPUS >= 4
> +	depends on !ARM || CPU_CACHE_VIPT
> +	depends on !PARISC || PA20
> +	depends on !SPARC32
>  
>  config ARCH_ENABLE_SPLIT_PMD_PTLOCK
>  	bool
>  
> +config SPLIT_PMD_PTLOCKS
> +	def_bool y
> +	depends on SPLIT_PTE_PTLOCKS && ARCH_ENABLE_SPLIT_PMD_PTLOCK
> +
>  #
>  # support for memory balloon
>  config MEMORY_BALLOON
> diff --git a/mm/memory.c b/mm/memory.c
> index 833d2cad6eb29..714589582fe15 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -6559,7 +6559,7 @@ long copy_folio_from_user(struct folio *dst_folio,
>  }
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
>  
> -#if USE_SPLIT_PTE_PTLOCKS && ALLOC_SPLIT_PTLOCKS
> +#if defined(CONFIG_SPLIT_PTE_PTLOCKS) && ALLOC_SPLIT_PTLOCKS
>  
> static struct kmem_cache *page_ptl_cachep;
> 