Date: Sun, 28 Jul 2024 15:47:19 +0300
From: Mike Rapoport
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org, x86@kernel.org,
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org,
	linux-fsdevel@vger.kernel.org, Andrew Morton, Oscar Salvador,
	Peter Xu, Muchun Song, Russell King, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, "Naveen N. Rao", Juergen Gross,
	Boris Ostrovsky, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, "H. Peter Anvin", Alexander Viro, Christian Brauner
Subject: Re: [PATCH v1 2/3] mm/hugetlb: enforce that PMD PT sharing has split PMD PT locks
References: <20240726150728.3159964-1-david@redhat.com>
 <20240726150728.3159964-3-david@redhat.com>
In-Reply-To: <20240726150728.3159964-3-david@redhat.com>

On Fri, Jul 26, 2024 at 05:07:27PM +0200, David Hildenbrand wrote:
> Sharing page tables between processes but falling back to per-MM page
> table locks cannot possibly work.
> 
> So, let's make sure that we do have split PMD locks by adding a new
> Kconfig option and letting that depend on CONFIG_SPLIT_PMD_PTLOCKS.
> 
> Signed-off-by: David Hildenbrand

Acked-by: Mike Rapoport (Microsoft)

> ---
>  fs/Kconfig              | 4 ++++
>  include/linux/hugetlb.h | 5 ++---
>  mm/hugetlb.c            | 8 ++++----
>  3 files changed, 10 insertions(+), 7 deletions(-)
> 
> diff --git a/fs/Kconfig b/fs/Kconfig
> index a46b0cbc4d8f6..0e4efec1d92e6 100644
> --- a/fs/Kconfig
> +++ b/fs/Kconfig
> @@ -288,6 +288,10 @@ config HUGETLB_PAGE_OPTIMIZE_VMEMMAP
>  	depends on ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
>  	depends on SPARSEMEM_VMEMMAP
>  
> +config HUGETLB_PMD_PAGE_TABLE_SHARING
> +	def_bool HUGETLB_PAGE
> +	depends on ARCH_WANT_HUGE_PMD_SHARE && SPLIT_PMD_PTLOCKS
> +
>  config ARCH_HAS_GIGANTIC_PAGE
>  	bool
>  
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index da800e56fe590..4d2f3224ff027 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -1243,7 +1243,7 @@ static inline __init void hugetlb_cma_reserve(int order)
>  }
>  #endif
>  
> -#ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
> +#ifdef CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING
>  static inline bool hugetlb_pmd_shared(pte_t *pte)
>  {
>  	return page_count(virt_to_page(pte)) > 1;
> @@ -1279,8 +1279,7 @@ bool __vma_private_lock(struct vm_area_struct *vma);
>  static inline pte_t *
>  hugetlb_walk(struct vm_area_struct *vma, unsigned long addr, unsigned long sz)
>  {
> -#if defined(CONFIG_HUGETLB_PAGE) && \
> -	defined(CONFIG_ARCH_WANT_HUGE_PMD_SHARE) && defined(CONFIG_LOCKDEP)
> +#if defined(CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING) && defined(CONFIG_LOCKDEP)
>  	struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
>  
>  	/*
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 0858a18272073..c4d94e122c41f 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -7211,7 +7211,7 @@ long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
>  	return 0;
>  }
>  
> -#ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
> +#ifdef CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING
>  static unsigned long page_table_shareable(struct vm_area_struct *svma,
>  					  struct vm_area_struct *vma,
>  					  unsigned long addr, pgoff_t idx)
> @@ -7373,7 +7373,7 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
>  	return 1;
>  }
>  
> -#else /* !CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
> +#else /* !CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING */
>  
>  pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
>  		      unsigned long addr, pud_t *pud)
> @@ -7396,7 +7396,7 @@ bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
>  {
>  	return false;
>  }
> -#endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
> +#endif /* CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING */
>  
>  #ifdef CONFIG_ARCH_WANT_GENERAL_HUGETLB
>  pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
> @@ -7494,7 +7494,7 @@ unsigned long hugetlb_mask_last_page(struct hstate *h)
>  /* See description above. Architectures can provide their own version. */
>  __weak unsigned long hugetlb_mask_last_page(struct hstate *h)
>  {
> -#ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
> +#ifdef CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING
>  	if (huge_page_size(h) == PMD_SIZE)
>  		return PUD_SIZE - PMD_SIZE;
>  #endif
> -- 
> 2.45.2
> 
> 

-- 
Sincerely yours,
Mike.