From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [PATCH v2 2/3] arm64: mm: HVO: support BBM of vmemmap pgtable safely
To: Muchun Song
CC: Catalin Marinas, Will Deacon, Mike Kravetz, Andrew Morton,
	Anshuman Khandual, "Matthew Wilcox (Oracle)", Kefeng Wang,
	LKML, Linux-MM
References: <20231220051855.47547-1-sunnanyong@huawei.com>
 <20231220051855.47547-3-sunnanyong@huawei.com>
 <08DCC8BB-631C-4F7A-BB0A-494AD2AD3465@linux.dev>
From: Nanyong Sun <sunnanyong@huawei.com>
Message-ID: <8e3b03bc-af43-adaf-5980-82548893a7c5@huawei.com>
Date: Wed, 20 Dec 2023 21:37:52 +0800
In-Reply-To: <08DCC8BB-631C-4F7A-BB0A-494AD2AD3465@linux.dev>

On 2023/12/20 14:32, Muchun Song wrote:
>
>> On Dec 20, 2023, at 13:18, Nanyong Sun wrote:
>>
>> Implement vmemmap_update_pmd and vmemmap_update_pte on arm64 to do the
>> BBM (break-before-make) logic when changing vmemmap page table entries;
>> both run under init_mm.page_table_lock.
>> If a translation fault on a vmemmap address happens concurrently after
>> the pte/pmd has been cleared, the vmemmap page fault handler acquires
>> init_mm.page_table_lock to wait for the vmemmap update to complete;
>> by then the virtual address is valid again, so the page fault can
>> return and the access can continue.
>> Otherwise, fall back to the traditional kernel fault handling.
>>
>> Implement vmemmap_flush_tlb_all/range on arm64 as no-ops, because the
>> TLB is already flushed in every single BBM.
>>
>> Signed-off-by: Nanyong Sun <sunnanyong@huawei.com>
>> ---
>>  arch/arm64/include/asm/esr.h |  4 ++
>>  arch/arm64/include/asm/mmu.h | 20 +++++++++
>>  arch/arm64/mm/fault.c        | 78 ++++++++++++++++++++++++++++++++++--
>>  arch/arm64/mm/mmu.c          | 28 +++++++++++++
>>  4 files changed, 127 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
>> index ae35939f395b..1c63256efd25 100644
>> --- a/arch/arm64/include/asm/esr.h
>> +++ b/arch/arm64/include/asm/esr.h
>> @@ -116,6 +116,10 @@
>>  #define ESR_ELx_FSC_SERROR	(0x11)
>>  #define ESR_ELx_FSC_ACCESS	(0x08)
>>  #define ESR_ELx_FSC_FAULT	(0x04)
>> +#define ESR_ELx_FSC_FAULT_L0	(0x04)
>> +#define ESR_ELx_FSC_FAULT_L1	(0x05)
>> +#define ESR_ELx_FSC_FAULT_L2	(0x06)
>> +#define ESR_ELx_FSC_FAULT_L3	(0x07)
>>  #define ESR_ELx_FSC_PERM	(0x0C)
>>  #define ESR_ELx_FSC_SEA_TTW0	(0x14)
>>  #define ESR_ELx_FSC_SEA_TTW1	(0x15)
>> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
>> index 2fcf51231d6e..b553bc37c925 100644
>> --- a/arch/arm64/include/asm/mmu.h
>> +++ b/arch/arm64/include/asm/mmu.h
>> @@ -76,5 +76,25 @@ extern bool kaslr_requires_kpti(void);
>>  #define INIT_MM_CONTEXT(name)	\
>>  	.pgd = init_pg_dir,
>>
>> +#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
>> +void vmemmap_update_pmd(unsigned long addr, pmd_t *pmdp, pte_t *ptep);
>> +#define vmemmap_update_pmd vmemmap_update_pmd
>> +void vmemmap_update_pte(unsigned long addr, pte_t *ptep, pte_t pte);
>> +#define vmemmap_update_pte vmemmap_update_pte
>> +
>> +static inline void vmemmap_flush_tlb_all(void)
>> +{
>> +	/* do nothing, already flushed tlb in every single BBM */
>> +}
>> +#define vmemmap_flush_tlb_all vmemmap_flush_tlb_all
>> +
>> +static inline void vmemmap_flush_tlb_range(unsigned long start,
>> +					    unsigned long end)
>> +{
>> +	/* do nothing, already flushed tlb in every single BBM */
>> +}
>> +#define vmemmap_flush_tlb_range vmemmap_flush_tlb_range
>> +#endif
>
> I think the declarations related to TLB flushing should be moved to
> arch/arm64/include/asm/tlbflush.h, since we do not include <asm/mmu.h>
> explicitly in hugetlb_vmemmap.c and their functionality is to flush the
> TLB. Luckily, <asm/tlbflush.h> is included by hugetlb_vmemmap.c.
>
> Additionally, the vmemmap_update_pmd/pte helpers should be moved to
> arch/arm64/include/asm/pgtable.h, since they are really pgtable-related
> operations.
>
> Thanks.

Yes, I will move them in the next version. Thanks for your time.
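
For reference, a minimal sketch of the BBM sequence the commit message above
describes: break (clear the entry), flush the TLB, then make (install the new
entry), all while init_mm.page_table_lock is held. This is a simplified
illustration rather than the code in the patch; the pmd variant follows the
same pattern.

#include <linux/mm.h>
#include <asm/tlbflush.h>

/*
 * Simplified illustration of one break-before-make vmemmap PTE update.
 * The caller in the HVO code is assumed to hold init_mm.page_table_lock.
 */
void vmemmap_update_pte(unsigned long addr, pte_t *ptep, pte_t pte)
{
	/* Break: clear the old entry so no stale translation can be used. */
	pte_clear(&init_mm, addr, ptep);

	/* Flush the TLB for this address before installing the new entry. */
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	/* Make: install the new entry. */
	set_pte_at(&init_mm, addr, ptep, pte);
}

Because every single update already flushes the TLB like this, the batched
vmemmap_flush_tlb_all/range calls from the common HVO code have nothing left
to do, which is why the arm64 versions are empty.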
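
The fault-handler side only needs to wait out a concurrent update. Below is a
rough sketch of the behaviour described in the commit message, using the
per-level translation fault codes added in the esr.h hunk above;
is_vmemmap_address() is a helper written for this sketch only, and the actual
fault.c changes are not quoted in this mail.

#include <linux/spinlock.h>
#include <asm/esr.h>
#include <asm/memory.h>

/* Helper for this sketch only: is the faulting address in the vmemmap range? */
static bool is_vmemmap_address(unsigned long addr)
{
	return addr >= VMEMMAP_START && addr < VMEMMAP_END;
}

static bool esr_is_translation_fault(unsigned long esr)
{
	switch (esr & ESR_ELx_FSC) {
	case ESR_ELx_FSC_FAULT_L0:
	case ESR_ELx_FSC_FAULT_L1:
	case ESR_ELx_FSC_FAULT_L2:
	case ESR_ELx_FSC_FAULT_L3:
		return true;
	default:
		return false;
	}
}

/*
 * Returns true if the fault was a vmemmap translation fault that has been
 * resolved by waiting for the concurrent BBM update to finish.
 */
static bool vmemmap_handle_page_fault(unsigned long addr, unsigned long esr)
{
	if (!is_vmemmap_address(addr) || !esr_is_translation_fault(esr))
		return false;

	/*
	 * The update side holds init_mm.page_table_lock for the whole BBM
	 * sequence, so taking and releasing it here waits until the new
	 * entry is installed; the faulting access can then simply retry.
	 */
	spin_lock(&init_mm.page_table_lock);
	spin_unlock(&init_mm.page_table_lock);
	return true;
}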
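
And a rough sketch of the relocation suggested above, i.e. how I read the
suggestion for the next version (not the final patch):

/* arch/arm64/include/asm/pgtable.h: the pgtable update helpers live here. */
#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
void vmemmap_update_pmd(unsigned long addr, pmd_t *pmdp, pte_t *ptep);
#define vmemmap_update_pmd vmemmap_update_pmd
void vmemmap_update_pte(unsigned long addr, pte_t *ptep, pte_t pte);
#define vmemmap_update_pte vmemmap_update_pte
#endif

/*
 * arch/arm64/include/asm/tlbflush.h: the TLB flush stubs live here, since
 * hugetlb_vmemmap.c already includes this header.
 */
#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
static inline void vmemmap_flush_tlb_all(void)
{
	/* nothing to do, the TLB is already flushed in every single BBM */
}
#define vmemmap_flush_tlb_all vmemmap_flush_tlb_all

static inline void vmemmap_flush_tlb_range(unsigned long start,
					   unsigned long end)
{
	/* nothing to do, the TLB is already flushed in every single BBM */
}
#define vmemmap_flush_tlb_range vmemmap_flush_tlb_range
#endif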