From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 03 Jun 2020 16:00:53 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, jgg@mellanox.com, linux-mm@kvack.org,
 lixinhai.lxh@gmail.com, longpeng2@huawei.com, mike.kravetz@oracle.com,
 mm-commits@vger.kernel.org, punit.agrawal@arm.com, torvalds@linux-foundation.org
Subject: [patch 074/131] mm/hugetlb: avoid unnecessary check on pud and pmd entry in huge_pte_offset
Message-ID: <20200603230053.3WZ4_914g%akpm@linux-foundation.org>
In-Reply-To: <20200603155549.e041363450869eaae4c7f05b@linux-foundation.org>
User-Agent: s-nail v14.8.16
From: Li Xinhai
Subject: mm/hugetlb: avoid unnecessary check on pud and pmd entry in huge_pte_offset

When huge_pte_offset() is called, the parameter sz can only be PUD_SIZE or
PMD_SIZE.  If sz is PUD_SIZE and the code reaches the pud, then *pud must be
none, a normal hugetlb entry, or a non-present (migration or hwpoisoned)
hugetlb entry, and we can return pud directly.  When sz is PMD_SIZE, the pud
must be none or present, and if the code reaches the pmd, we can return pmd
directly.

So after this patch the code is simplified by checking the parameter sz
first, which avoids the unnecessary checks in the current code.  The
semantics of the existing code are maintained.

More details about the relevant commits: commit 9b19df292c66 ("mm/hugetlb.c:
make huge_pte_offset() consistent and document behaviour") changed the code
path for pud and pmd handling; see the comments below on why this patch
changes it again.

...
	pud = pud_offset(p4d, addr);
	if (sz != PUD_SIZE && pud_none(*pud))		// [1]
		return NULL;
	/* hugepage or swap? */
	if (pud_huge(*pud) || !pud_present(*pud))	// [2]
		return (pte_t *)pud;

	pmd = pmd_offset(pud, addr);
	if (sz != PMD_SIZE && pmd_none(*pmd))		// [3]
		return NULL;
	/* hugepage or swap? */
	if (pmd_huge(*pmd) || !pmd_present(*pmd))	// [4]
		return (pte_t *)pmd;

	return NULL;					// [5]
...

[1]: this is necessary; it returns NULL for sz == PMD_SIZE;
[2]: if sz == PUD_SIZE, every valid value of the pud entry causes a return;
[3]: dead code, sz != PMD_SIZE is never true here;
[4]: every valid value of the pmd entry causes a return;
[5]: dead code, because of the check in [4].

Now, this patch combines [1] and [2] for the pud, and combines [3], [4] and
[5] for the pmd, avoiding the unnecessary checks.  I don't try to catch any
invalid values in the page table entry, as those will be checked by the
caller; this avoids an extra branch in this function.  There is also no
assertion that sz must equal PUD_SIZE or PMD_SIZE, since this function is
only called for hugetlb mappings.  (Illustrative sketches of the resulting
function and of a typical caller follow the patch below.)

Regarding commit 3c1d7e6ccb64 ("mm/hugetlb: fix a addressing exception
caused by huge_pte_offset"): since we don't read an entry more than once
now, the variables pud_entry and pmd_entry are not needed.

Link: http://lkml.kernel.org/r/1587794313-16849-1-git-send-email-lixinhai.lxh@gmail.com
Signed-off-by: Li Xinhai
Cc: Mike Kravetz
Cc: Jason Gunthorpe
Cc: Punit Agrawal
Cc: Longpeng
Signed-off-by: Andrew Morton
---

 mm/hugetlb.c |   28 +++++++++++-----------------
 1 file changed, 11 insertions(+), 17 deletions(-)

--- a/mm/hugetlb.c~mm-hugetlb-avoid-unnecessary-check-on-pud-and-pmd-entry-in-huge_pte_offset
+++ a/mm/hugetlb.c
@@ -5469,8 +5469,8 @@ pte_t *huge_pte_alloc(struct mm_struct *
  * huge_pte_offset() - Walk the page table to resolve the hugepage
  * entry at address @addr
  *
- * Return: Pointer to page table or swap entry (PUD or PMD) for
- * address @addr, or NULL if a p*d_none() entry is encountered and the
+ * Return: Pointer to page table entry (PUD or PMD) for
+ * address @addr, or NULL if a !p*d_present() entry is encountered and the
  * size @sz doesn't match the hugepage size at this level of the page
  * table.
  */
@@ -5479,8 +5479,8 @@ pte_t *huge_pte_offset(struct mm_struct
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
-	pud_t *pud, pud_entry;
-	pmd_t *pmd, pmd_entry;
+	pud_t *pud;
+	pmd_t *pmd;
 
 	pgd = pgd_offset(mm, addr);
 	if (!pgd_present(*pgd))
@@ -5490,22 +5490,16 @@ pte_t *huge_pte_offset(struct mm_struct
 		return NULL;
 
 	pud = pud_offset(p4d, addr);
-	pud_entry = READ_ONCE(*pud);
-	if (sz != PUD_SIZE && pud_none(pud_entry))
-		return NULL;
-	/* hugepage or swap? */
-	if (pud_huge(pud_entry) || !pud_present(pud_entry))
+	if (sz == PUD_SIZE)
+		/* must be pud huge, non-present or none */
 		return (pte_t *)pud;
-
-	pmd = pmd_offset(pud, addr);
-	pmd_entry = READ_ONCE(*pmd);
-	if (sz != PMD_SIZE && pmd_none(pmd_entry))
+	if (!pud_present(*pud))
 		return NULL;
-	/* hugepage or swap? */
-	if (pmd_huge(pmd_entry) || !pmd_present(pmd_entry))
-		return (pte_t *)pmd;
+	/* must have a valid entry and size to go further */
 
-	return NULL;
+	pmd = pmd_offset(pud, addr);
+	/* must be pmd huge, non-present or none */
+	return (pte_t *)pmd;
 }
 
 #endif /* CONFIG_ARCH_WANT_GENERAL_HUGETLB */
_
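
For readability, here is roughly how huge_pte_offset() reads with the patch
applied, consolidated from the two hunks above.  The pgd/p4d lines between
the hunks are filled in from the hunk context and may not match the tree
exactly:

pte_t *huge_pte_offset(struct mm_struct *mm,
		       unsigned long addr, unsigned long sz)
{
	pgd_t *pgd;
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;

	pgd = pgd_offset(mm, addr);
	if (!pgd_present(*pgd))
		return NULL;
	p4d = p4d_offset(pgd, addr);
	if (!p4d_present(*p4d))
		return NULL;

	pud = pud_offset(p4d, addr);
	if (sz == PUD_SIZE)
		/* must be pud huge, non-present or none */
		return (pte_t *)pud;
	if (!pud_present(*pud))
		return NULL;
	/* must have a valid entry and size to go further */

	pmd = pmd_offset(pud, addr);
	/* must be pmd huge, non-present or none */
	return (pte_t *)pmd;
}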
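
And a hypothetical caller-side sketch (not taken from this patch or from any
particular caller in mm/hugetlb.c) showing the division of labour the
changelog describes: huge_pte_offset() only resolves the PUD/PMD slot, and
the caller reads the entry once and classifies it.  The function name
example_hugetlb_walk and its return convention are made up for illustration:

/* Illustrative only: names and structure are not from mm/hugetlb.c. */
static int example_hugetlb_walk(struct mm_struct *mm, struct hstate *h,
				unsigned long addr)
{
	pte_t *ptep, entry;

	/* sz is the mapping's hugepage size, i.e. PMD_SIZE or PUD_SIZE */
	ptep = huge_pte_offset(mm, addr, huge_page_size(h));
	if (!ptep)
		return 0;		/* nothing mapped at this level */

	entry = huge_ptep_get(ptep);
	if (huge_pte_none(entry))
		return 0;		/* empty slot; a fault would populate it */
	if (!pte_present(entry))
		return 0;		/* migration or hwpoison swap entry */

	/* present hugepage entry; usable under suitable locking */
	return 1;
}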