From: Anshuman Khandual <anshuman.khandual@arm.com>
Date: Tue, 21 Jun 2022 15:14:13 +0530
Subject: Re: [PATCH V3 1/2] mm/mmap: Restrict generic protection_map[] array visibility
To: Christophe Leroy, linux-mm@kvack.org
Cc: hch@infradead.org, Andrew Morton, linux-kernel@vger.kernel.org, Christoph Hellwig
Message-ID: <8c143d11-2371-23db-0476-0df569faaff9@arm.com>
In-Reply-To: <2ade2595-d03b-6027-f93a-fa8f7c39654e@csgroup.eu>
References: <20220616040924.1022607-1-anshuman.khandual@arm.com>
 <20220616040924.1022607-2-anshuman.khandual@arm.com>
 <4830e415-cdbb-7050-ebd6-7480493655ef@csgroup.eu>
 <2ade2595-d03b-6027-f93a-fa8f7c39654e@csgroup.eu>

On 6/20/22 12:11, Christophe Leroy wrote:
>
>
> Le 20/06/2022 à 07:16, Anshuman Khandual a écrit :
>>
>>
>> On 6/16/22 11:05, Christophe Leroy wrote:
>>>
>>> Le 16/06/2022 à 06:09, Anshuman Khandual a écrit :
>>>> Restrict generic protection_map[] array visibility only to platforms
>>>> which do not enable ARCH_HAS_VM_GET_PAGE_PROT. Other platforms that
>>>> define their own vm_get_page_prot() by enabling
>>>> ARCH_HAS_VM_GET_PAGE_PROT can keep a private static protection_map[]
>>>> still implementing an array look up. These private protection_map[]
>>>> arrays can do without the __PXXX/__SXXX macros, making them redundant,
>>>> so they are dropped as well.
>>>>
>>>> But platforms which do not define a custom vm_get_page_prot() enabling
>>>> ARCH_HAS_VM_GET_PAGE_PROT will still have to provide the __PXXX/__SXXX
>>>> macros.
>>>>
>>>> Cc: Andrew Morton
>>>> Cc: linux-mm@kvack.org
>>>> Cc: linux-kernel@vger.kernel.org
>>>> Acked-by: Christoph Hellwig
>>>> Signed-off-by: Anshuman Khandual
>>>> ---
>>>>  arch/arm64/include/asm/pgtable-prot.h | 18 ------------------
>>>>  arch/arm64/mm/mmap.c                  | 21 +++++++++++++++++++++
>>>>  arch/powerpc/include/asm/pgtable.h    |  2 ++
>>>>  arch/powerpc/mm/book3s64/pgtable.c    | 20 ++++++++++++++++++++
>>>>  arch/sparc/include/asm/pgtable_64.h   | 19 -------------------
>>>>  arch/sparc/mm/init_64.c               |  3 +++
>>>>  arch/x86/include/asm/pgtable_types.h  | 19 -------------------
>>>>  arch/x86/mm/pgprot.c                  | 19 +++++++++++++++++++
>>>>  include/linux/mm.h                    |  2 ++
>>>>  mm/mmap.c                             |  2 +-
>>>>  10 files changed, 68 insertions(+), 57 deletions(-)
>>>>
>>>> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
>>>> index d564d0ecd4cd..8ed2a80c896e 100644
>>>> --- a/arch/powerpc/include/asm/pgtable.h
>>>> +++ b/arch/powerpc/include/asm/pgtable.h
>>>> @@ -21,6 +21,7 @@ struct mm_struct;
>>>>  #endif /* !CONFIG_PPC_BOOK3S */
>>>>
>>>>  /* Note due to the way vm flags are laid out, the bits are XWR */
>>>> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>>> This ifdef is not necessary for now, it doesn't matter if __P000 etc
>>> still exist though not used.
>>>
>>>>  #define __P000	PAGE_NONE
>>>>  #define __P001	PAGE_READONLY
>>>>  #define __P010	PAGE_COPY
>>>> @@ -38,6 +39,7 @@ struct mm_struct;
>>>>  #define __S101	PAGE_READONLY_X
>>>>  #define __S110	PAGE_SHARED_X
>>>>  #define __S111	PAGE_SHARED_X
>>>> +#endif
>>>>
>>>>  #ifndef __ASSEMBLY__
>>>>
>>>> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
>>>> index 7b9966402b25..d3b019b95c1d 100644
>>>> --- a/arch/powerpc/mm/book3s64/pgtable.c
>>>> +++ b/arch/powerpc/mm/book3s64/pgtable.c
>>>> @@ -551,6 +551,26 @@ unsigned long memremap_compat_align(void)
>>>>  EXPORT_SYMBOL_GPL(memremap_compat_align);
>>>>  #endif
>>>>
>>>> +/* Note due to the way vm flags are laid out, the bits are XWR */
>>>> +static const pgprot_t protection_map[16] = {
>>>> +	[VM_NONE]					= PAGE_NONE,
>>>> +	[VM_READ]					= PAGE_READONLY,
>>>> +	[VM_WRITE]					= PAGE_COPY,
>>>> +	[VM_WRITE | VM_READ]				= PAGE_COPY,
>>>> +	[VM_EXEC]					= PAGE_READONLY_X,
>>>> +	[VM_EXEC | VM_READ]				= PAGE_READONLY_X,
>>>> +	[VM_EXEC | VM_WRITE]				= PAGE_COPY_X,
>>>> +	[VM_EXEC | VM_WRITE | VM_READ]			= PAGE_COPY_X,
>>>> +	[VM_SHARED]					= PAGE_NONE,
>>>> +	[VM_SHARED | VM_READ]				= PAGE_READONLY,
>>>> +	[VM_SHARED | VM_WRITE]				= PAGE_SHARED,
>>>> +	[VM_SHARED | VM_WRITE | VM_READ]		= PAGE_SHARED,
>>>> +	[VM_SHARED | VM_EXEC]				= PAGE_READONLY_X,
>>>> +	[VM_SHARED | VM_EXEC | VM_READ]			= PAGE_READONLY_X,
>>>> +	[VM_SHARED | VM_EXEC | VM_WRITE]		= PAGE_SHARED_X,
>>>> +	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= PAGE_SHARED_X
>>>> +};
>>>> +
>>> There is not much point in first adding that here and then moving it
>>> elsewhere in the second patch.
>>>
>>> I think with my suggestion to use #ifdef __P000 as a guard, the powerpc
>>> changes could go in a single patch.
>>>
>>>>  pgprot_t vm_get_page_prot(unsigned long vm_flags)
>>>>  {
>>>>  	unsigned long prot = pgprot_val(protection_map[vm_flags &
>>>> diff --git a/mm/mmap.c b/mm/mmap.c
>>>> index 61e6135c54ef..e66920414945 100644
>>>> --- a/mm/mmap.c
>>>> +++ b/mm/mmap.c
>>>> @@ -101,6 +101,7 @@ static void unmap_region(struct mm_struct *mm,
>>>>   * w: (no) no
>>>>   * x: (yes) yes
>>>>   */
>>>> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>>> You should use #ifdef __P000 instead, that way you could migrate
>>> architectures one by one.
>>
>> If vm_get_page_prot() gets moved into all platforms, what would be the
>> preferred method to organize this patch series?
>>
>> 1. Move protection_map[] inside platforms with ARCH_HAS_VM_GET_PAGE_PROT (current patch 1)
>> 2. Convert remaining platforms to use ARCH_HAS_VM_GET_PAGE_PROT one after the other
>> 3. Drop ARCH_HAS_VM_GET_PAGE_PROT completely
>>
>> Using "#ifdef __P000" for wrapping protection_map[] will leave two different
>> #ifdefs in flight, i.e. __P000 and ARCH_HAS_VM_GET_PAGE_PROT, in the generic
>> mmap code until both get dropped eventually. But using "#ifdef __P000" does
>> enable splitting the first patch into multiple changes, one per platform.
>
> From previous discussions and based on Christoph's suggestion, I guess
> we now aim at getting vm_get_page_prot() moved into all platforms
> together with protection_map[]. Therefore the use of #ifdef __P000 could
> be very temporary at the beginning of the series:
> 1. Guard generic protection_map[] with #ifdef __P000
> 2. Move protection_map[] into architecture and drop __Pxxx/__Sxxx for arm64
> 3. Same for sparc
> 4. Same for x86
> 5. Convert entire powerpc to ARCH_HAS_VM_GET_PAGE_PROT and move
> protection_map[] into architecture and drop __Pxxx/__Sxxx
> 6. Replace #ifdef __P000 by #ifdef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
> 7. Convert all remaining platforms to CONFIG_ARCH_HAS_VM_GET_PAGE_PROT one
> by one (but keep a protection_map[] table, don't use switch/case)
> 8. Drop ARCH_HAS_VM_GET_PAGE_PROT completely.
>
> Eventually you can squash step 6 into step 8.

Keeping the individual platform changes in separate patches will make the
series cleaner, and also much easier to review. The flow explained above
sounds good to me. I will work on these changes.
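
Just to spell out step 1 as I understand it, the generic table in mm/mmap.c
would temporarily be wrapped like this (a rough sketch from memory of the
current declaration, not the final patch):

	#ifdef __P000	/* only while some arch still provides __Pxxx/__Sxxx */
	pgprot_t protection_map[16] __ro_after_init = {
		__P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111,
		__S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111
	};
	#endif

so each architecture drops out of the generic table automatically, the
moment its __Pxxx/__Sxxx macros go away.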
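
And for step 7, the end state on each remaining platform would be just its
own protection_map[] table (along the lines of the powerpc one quoted above,
with that platform's PAGE_* values) plus a trivial lookup, roughly:

	/* Only the READ/WRITE/EXEC/SHARED bits select the table entry */
	pgprot_t vm_get_page_prot(unsigned long vm_flags)
	{
		return protection_map[vm_flags &
				(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];
	}
	EXPORT_SYMBOL(vm_get_page_prot);

together with selecting ARCH_HAS_VM_GET_PAGE_PROT in that platform's Kconfig
and no __Pxxx/__Sxxx macros left behind.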