From: Zhaoyang Huang <huangzhaoyang@gmail.com>
Date: Thu, 18 Nov 2021 19:39:43 +0800
In-Reply-To: <1637223483-2867-1-git-send-email-huangzhaoyang@gmail.com>
Subject: Re: [RFC PATCH] arch: arm64: try to use PTE_CONT when change page attr
To: Ard Biesheuvel, Catalin Marinas, Will Deacon, Anshuman Khandual,
    Andrew Morton, Nicholas Piggin, Mike Rapoport, Pavel Tatashin,
    Christophe Leroy, Jonathan Marek, Zhaoyang Huang,
    "open list:MEMORY MANAGEMENT", LKML

I got the criteria for judging the linear address range wrong, so please
ignore this patch. (A corrected sketch of the intended range split follows
the quoted patch below.)

On Thu, Nov 18, 2021 at 4:18 PM Huangzhaoyang wrote:
>
> From: Zhaoyang Huang
>
> The kernel uses the minimum mapping granularity when rodata_full is
> enabled, which makes TLB pressure high. Furthermore, PTE_CONT is never
> applied. Try to improve this a little by applying PTE_CONT when
> changing a page's attributes.
>
> Signed-off-by: Zhaoyang Huang
> ---
>  arch/arm64/mm/pageattr.c | 62 ++++++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 58 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index a3bacd7..0b6a354 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -61,8 +61,13 @@ static int change_memory_common(unsigned long addr, int numpages,
>  	unsigned long start = addr;
>  	unsigned long size = PAGE_SIZE * numpages;
>  	unsigned long end = start + size;
> +	unsigned long cont_pte_start = 0;
> +	unsigned long cont_pte_end = 0;
> +	unsigned long cont_pmd_start = 0;
> +	unsigned long cont_pmd_end = 0;
> +	pgprot_t orig_set_mask = set_mask;
>  	struct vm_struct *area;
> -	int i;
> +	int i = 0;
>
>  	if (!PAGE_ALIGNED(addr)) {
>  		start &= PAGE_MASK;
> @@ -98,9 +103,58 @@ static int change_memory_common(unsigned long addr, int numpages,
>  	 */
>  	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
>  			    pgprot_val(clear_mask) == PTE_RDONLY)) {
> -		for (i = 0; i < area->nr_pages; i++) {
> -			__change_memory_common((u64)page_address(area->pages[i]),
> -					       PAGE_SIZE, set_mask, clear_mask);
> +		cont_pmd_start = (start + ~CONT_PMD_MASK + 1) & CONT_PMD_MASK;
> +		cont_pmd_end = cont_pmd_start + ~CONT_PMD_MASK + 1;
> +		cont_pte_start = (start + ~CONT_PTE_MASK + 1) & CONT_PTE_MASK;
> +		cont_pte_end = cont_pte_start + ~CONT_PTE_MASK + 1;
> +
> +		if (addr <= cont_pmd_start && end > cont_pmd_end) {
> +			do {
> +				__change_memory_common((u64)page_address(area->pages[i]),
> +						       PAGE_SIZE, set_mask, clear_mask);
> +				i++;
> +				addr++;
> +			} while(addr < cont_pmd_start);
> +			do {
> +				set_mask = __pgprot(pgprot_val(set_mask) | PTE_CONT);
> +				__change_memory_common((u64)page_address(area->pages[i]),
> +						       PAGE_SIZE, set_mask, clear_mask);
> +				i++;
> +				addr++;
> +			} while(addr < cont_pmd_end);
> +			set_mask = orig_set_mask;
> +			do {
> +				__change_memory_common((u64)page_address(area->pages[i]),
> +						       PAGE_SIZE, set_mask, clear_mask);
> +				i++;
> +				addr++;
> +			} while(addr <= end);
> +		} else if (addr <= cont_pte_start && end > cont_pte_end) {
> +			do {
> +				__change_memory_common((u64)page_address(area->pages[i]),
> +						       PAGE_SIZE, set_mask, clear_mask);
> +				i++;
> +				addr++;
> +			} while(addr < cont_pte_start);
> +			do {
> +				set_mask = __pgprot(pgprot_val(set_mask) | PTE_CONT);
> +				__change_memory_common((u64)page_address(area->pages[i]),
> +						       PAGE_SIZE, set_mask, clear_mask);
> +				i++;
> +				addr++;
> +			} while(addr < cont_pte_end);
> +			set_mask = orig_set_mask;
> +			do {
> +				__change_memory_common((u64)page_address(area->pages[i]),
> +						       PAGE_SIZE, set_mask, clear_mask);
> +				i++;
> +				addr++;
> +			} while(addr <= end);
> +		} else {
> +			for (i = 0; i < area->nr_pages; i++) {
> +				__change_memory_common((u64)page_address(area->pages[i]),
> +						       PAGE_SIZE, set_mask, clear_mask);
> +			}
>  		}
>  	}
>
> --
> 1.9.1
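For reference, the head/middle/tail split the patch aims for can be sketched
stand-alone as below. This is a user-space illustration, not kernel code:
PAGE_SIZE, CONT_PTES and CONT_PTE_MASK are redefined locally with assumed
values (4K pages, 16 PTEs per contiguous block, the common arm64 4K-granule
configuration). Two of the range criteria in the hunk above are wrong, which
is why the patch is withdrawn: cont_pte_end is a single CONT_PTE block past
cont_pte_start rather than `end` rounded down, so at most one block is ever
marked, and `addr++` advances one byte per page where `addr += PAGE_SIZE` is
needed, so the loop bounds never line up with the pages being walked.

/*
 * Stand-alone sketch of the head/middle/tail split, with assumed
 * constants (4K pages, 16 contiguous PTEs per CONT_PTE block).
 * Not kernel code; it only demonstrates the address arithmetic.
 */
#include <stdio.h>

#define PAGE_SIZE	0x1000UL		/* assumed: 4K granule */
#define CONT_PTES	16UL			/* assumed: PTEs per contiguous span */
#define CONT_PTE_SIZE	(CONT_PTES * PAGE_SIZE)	/* 64K */
#define CONT_PTE_MASK	(~(CONT_PTE_SIZE - 1))

int main(void)
{
	unsigned long start = 3 * PAGE_SIZE;	/* page-aligned, not 64K-aligned */
	unsigned long end = start + 60 * PAGE_SIZE;

	/* Round start up and end down to CONT_PTE_SIZE boundaries. */
	unsigned long cont_start = (start + CONT_PTE_SIZE - 1) & CONT_PTE_MASK;
	unsigned long cont_end = end & CONT_PTE_MASK;
	unsigned long a;

	if (cont_start < cont_end) {
		/* Head: pages before the first aligned boundary, no PTE_CONT. */
		for (a = start; a < cont_start; a += PAGE_SIZE)
			printf("head page %#lx\n", a);
		/* Middle: whole CONT_PTE blocks, eligible for PTE_CONT. */
		for (a = cont_start; a < cont_end; a += PAGE_SIZE)
			printf("cont page %#lx\n", a);
		/* Tail: pages after the last aligned boundary, no PTE_CONT. */
		for (a = cont_end; a < end; a += PAGE_SIZE)
			printf("tail page %#lx\n", a);
	} else {
		/* Range never spans a full block: per-page attributes only. */
		for (a = start; a < end; a += PAGE_SIZE)
			printf("page %#lx\n", a);
	}
	return 0;
}

With start three pages past a 64K boundary and 60 pages total, this prints 13
head pages, 32 contiguous-eligible pages (two full blocks) and 15 tail pages.
Unlike the hunk above, it handles any number of whole blocks inside the range
and walks addresses in PAGE_SIZE steps.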