Subject: Re: [PATCH 1/3] mm: HVO: introduce helper function to update and flush pgtable
From: Nanyong Sun <sunnanyong@huawei.com>
To: Muchun Song
References: <20231214073912.1938330-1-sunnanyong@huawei.com>
 <20231214073912.1938330-2-sunnanyong@huawei.com>
 <0100b6c8-24db-fbcf-d45e-763cfccfa0c5@linux.dev>
Message-ID: <1bc0d2d8-567e-9fc1-39a5-ed498ad1d2d2@huawei.com>
Date: Mon, 18 Dec 2023 17:53:53 +0800
In-Reply-To: <0100b6c8-24db-fbcf-d45e-763cfccfa0c5@linux.dev>

On 2023/12/15 11:36, Muchun Song wrote:
>
>
> On 2023/12/14 15:39, Nanyong Sun wrote:
>> Add pmd/pte update and TLB flush helper functions for updating the
>> page table. This refactoring patch is designed to let each
>> architecture implement its own special logic, in preparation for
>> the arm64 architecture, which must follow the break-before-make
>> sequence when updating page tables.
>>
>> Signed-off-by: Nanyong Sun <sunnanyong@huawei.com>
>> ---
>>   mm/hugetlb_vmemmap.c | 55 ++++++++++++++++++++++++++++++++++----------
>>   1 file changed, 43 insertions(+), 12 deletions(-)
>>
>> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
>> index 87818ee7f01d..49e8b351def3 100644
>> --- a/mm/hugetlb_vmemmap.c
>> +++ b/mm/hugetlb_vmemmap.c
>> @@ -45,6 +45,37 @@ struct vmemmap_remap_walk {
>>       unsigned long        flags;
>>   };
>>
>> +#ifndef vmemmap_update_pmd
>> +static inline void vmemmap_update_pmd(unsigned long start,
>> +                      pmd_t *pmd, pte_t *pgtable)
>> +{
>> +    pmd_populate_kernel(&init_mm, pmd, pgtable);
>> +}
>> +#endif
>> +
>> +#ifndef vmemmap_update_pte
>> +static inline void vmemmap_update_pte(unsigned long addr,
>> +                      pte_t *pte, pte_t entry)
>> +{
>> +    set_pte_at(&init_mm, addr, pte, entry);
>> +}
>> +#endif
>> +
>> +#ifndef flush_tlb_vmemmap_all
>> +static inline void flush_tlb_vmemmap_all(void)
>> +{
>> +    flush_tlb_all();
>> +}
>> +#endif
>> +
>> +#ifndef flush_tlb_vmemmap_range
>> +static inline void flush_tlb_vmemmap_range(unsigned long start,
>> +                       unsigned long end)
>> +{
>> +    flush_tlb_kernel_range(start, end);
>> +}
>> +#endif
>
> I'd like to rename both TLB-flush helpers to vmemmap_flush_tlb_all/range,
> since all the other helpers are prefixed with "vmemmap". That would be
> more consistent.
>
> Otherwise LGTM. Thanks.
>
> Reviewed-by: Muchun Song

Hi Muchun,

Thank you for your review of this patch set. I'll fix these up and send
out the v2 version later.

>
>> +
>>   static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start, bool flush)
>>   {
>>       pmd_t __pmd;
>> @@ -87,9 +118,9 @@ static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start, bool flush)
>>
>>           /* Make pte visible before pmd. See comment in pmd_install(). */
>>           smp_wmb();
>> -        pmd_populate_kernel(&init_mm, pmd, pgtable);
>> +        vmemmap_update_pmd(start, pmd, pgtable);
>>           if (flush)
>> -            flush_tlb_kernel_range(start, start + PMD_SIZE);
>> +            flush_tlb_vmemmap_range(start, start + PMD_SIZE);
>>       } else {
>>           pte_free_kernel(&init_mm, pgtable);
>>       }
>> @@ -217,7 +248,7 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
>>       } while (pgd++, addr = next, addr != end);
>>
>>       if (walk->remap_pte && !(walk->flags & VMEMMAP_REMAP_NO_TLB_FLUSH))
>> -        flush_tlb_kernel_range(start, end);
>> +        flush_tlb_vmemmap_range(start, end);
>>
>>       return 0;
>>   }
>> @@ -263,15 +294,15 @@ static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
>>
>>           /*
>>            * Makes sure that preceding stores to the page contents from
>> -         * vmemmap_remap_free() become visible before the set_pte_at()
>> -         * write.
>> +         * vmemmap_remap_free() become visible before the
>> +         * vmemmap_update_pte() write.
>>            */
>>           smp_wmb();
>>       }
>>
>>       entry = mk_pte(walk->reuse_page, pgprot);
>>       list_add(&page->lru, walk->vmemmap_pages);
>> -    set_pte_at(&init_mm, addr, pte, entry);
>> +    vmemmap_update_pte(addr, pte, entry);
>>   }
>>
>>   /*
>> @@ -310,10 +341,10 @@ static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
>>
>>       /*
>>        * Makes sure that preceding stores to the page contents become visible
>> -     * before the set_pte_at() write.
>> +     * before the vmemmap_update_pte() write.
>>        */
>>       smp_wmb();
>> -    set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
>> +    vmemmap_update_pte(addr, pte, mk_pte(page, pgprot));
>>   }
>>
>>   /**
>> @@ -576,7 +607,7 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
>>       }
>>
>>       if (restored)
>> -        flush_tlb_all();
>> +        flush_tlb_vmemmap_all();
>>       if (!ret)
>>           ret = restored;
>>       return ret;
>> @@ -744,7 +775,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
>>               break;
>>       }
>>
>> -    flush_tlb_all();
>> +    flush_tlb_vmemmap_all();
>>
>>       list_for_each_entry(folio, folio_list, lru) {
>>           int ret = __hugetlb_vmemmap_optimize_folio(h, folio,
>> @@ -760,7 +791,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
>>            * allowing more vmemmap remaps to occur.
>>            */
>>           if (ret == -ENOMEM && !list_empty(&vmemmap_pages)) {
>> -            flush_tlb_all();
>> +            flush_tlb_vmemmap_all();
>>               free_vmemmap_page_list(&vmemmap_pages);
>>               INIT_LIST_HEAD(&vmemmap_pages);
>>               __hugetlb_vmemmap_optimize_folio(h, folio,
>> @@ -769,7 +800,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
>>           }
>>       }
>>
>> -    flush_tlb_all();
>> +    flush_tlb_vmemmap_all();
>>       free_vmemmap_page_list(&vmemmap_pages);
>>   }
>
> .
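
For readers following along: with the rename Muchun suggests, the two
TLB-flush hooks in v2 would presumably read as below. This is only a
sketch of the expected rename; the actual v2 may differ.

#ifndef vmemmap_flush_tlb_all
static inline void vmemmap_flush_tlb_all(void)
{
	/* Generic fallback: flush the whole TLB. */
	flush_tlb_all();
}
#endif

#ifndef vmemmap_flush_tlb_range
static inline void vmemmap_flush_tlb_range(unsigned long start,
					   unsigned long end)
{
	/* Generic fallback: flush only the remapped vmemmap range. */
	flush_tlb_kernel_range(start, end);
}
#endif

As with vmemmap_update_pmd/pte, an architecture overrides a hook by
defining a macro of the same name, which suppresses the generic
#ifndef fallback.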
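
To illustrate why the hooks exist at all: on arm64, changing a live
kernel mapping requires the break-before-make sequence the commit
message mentions. A hypothetical override of vmemmap_update_pte()
along those lines might look like the sketch below — illustrative
only, not code from this series; the real arm64 implementation
arrives in a later patch and may differ.

/*
 * Hypothetical break-before-make override: invalidate the old
 * entry and flush it from the TLB before installing the new one,
 * so no CPU can observe two conflicting valid translations.
 */
#define vmemmap_update_pte vmemmap_update_pte
static inline void vmemmap_update_pte(unsigned long addr,
				      pte_t *pte, pte_t entry)
{
	pte_clear(&init_mm, addr, pte);			/* break */
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);	/* flush */
	set_pte_at(&init_mm, addr, pte, entry);		/* make */
}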