From: Miaohe Lin
Subject: [PATCH v2 4/5] mm/huge_memory.c: remove unnecessary tlb_remove_page_size() for huge zero pmd
Date: Thu, 29 Apr 2021 21:26:47 +0800
Message-ID: <20210429132648.305447-5-linmiaohe@huawei.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20210429132648.305447-1-linmiaohe@huawei.com>
References: <20210429132648.305447-1-linmiaohe@huawei.com>

Commit aa88b68c3b1d ("thp: keep huge zero page pinned until tlb flush")
introduced tlb_remove_page() for the huge zero page to keep it pinned
until the flush is complete, preventing the page
from being split under us. But the huge zero page is kept pinned until
all relevant mm_users reach zero since commit 6fcb52a56ff6 ("thp: reduce
usage of huge zero page's atomic counter"). So tlb_remove_page_size()
for the huge zero pmd is unnecessary now.

Signed-off-by: Miaohe Lin
---
 mm/huge_memory.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e24a96de2e37..af30338ac49c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1680,12 +1680,9 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
-		if (is_huge_zero_pmd(orig_pmd))
-			tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
 	} else if (is_huge_zero_pmd(orig_pmd)) {
 		zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
-		tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
 	} else {
 		struct page *page = NULL;
 		int flush_needed = 1;
-- 
2.23.0
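
For reference, the per-mm pinning added by commit 6fcb52a56ff6 is what
makes the explicit tlb_remove_page_size() redundant: the first THP fault
in an mm takes a reference on the huge zero page, and that reference is
only dropped when the mm itself is torn down, so the page cannot be freed
(and therefore cannot be split) while zap_huge_pmd() is still running for
that mm. A simplified sketch of that model (paraphrased from
mm/huge_memory.c; it may not match the current tree line for line):

/*
 * Pin the global huge zero page once per mm; the MMF_HUGE_ZERO_PAGE
 * flag records that this mm already holds its reference.
 */
struct page *mm_get_huge_zero_page(struct mm_struct *mm)
{
	if (test_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
		return READ_ONCE(huge_zero_page);

	if (!get_huge_zero_page())
		return NULL;

	/* Another thread raced with us and already pinned it for this mm. */
	if (test_and_set_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
		put_huge_zero_page();

	return READ_ONCE(huge_zero_page);
}

/*
 * Called from __mmput() once mm_users has dropped to zero; only after
 * this may the shrinker actually free the huge zero page.
 */
void mm_put_huge_zero_page(struct mm_struct *mm)
{
	if (test_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
		put_huge_zero_page();
}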