From: Daniel Axtens
To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Steven Price, akpm@linux-foundation.org
Cc: linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-riscv@lists.infradead.org,
	x86@kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v1 1/5] mm: pagewalk: Fix walk for hugepage tables
In-Reply-To: <733408f48b1ed191f53518123ee6fc6d42287cc6.1618506910.git.christophe.leroy@csgroup.eu>
References: <733408f48b1ed191f53518123ee6fc6d42287cc6.1618506910.git.christophe.leroy@csgroup.eu>
Date: Fri, 16 Apr 2021 08:43:38 +1000
Message-ID: <877dl3184l.fsf@dja-thinkpad.axtens.net>

Hi Christophe,

> Pagewalk ignores hugepd entries and walk down the tables
> as if it was traditionnal entries, leading to crazy result.
>
> Add walk_hugepd_range() and use it to walk hugepage tables.
>
> Signed-off-by: Christophe Leroy
> ---
>  mm/pagewalk.c | 54 +++++++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 48 insertions(+), 6 deletions(-)
>
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index e81640d9f177..410a9d8f7572 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -58,6 +58,32 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>  	return err;
>  }
>
> +static int walk_hugepd_range(hugepd_t *phpd, unsigned long addr,
> +			     unsigned long end, struct mm_walk *walk, int pdshift)
> +{
> +	int err = 0;
> +#ifdef CONFIG_ARCH_HAS_HUGEPD
> +	const struct mm_walk_ops *ops = walk->ops;
> +	int shift = hugepd_shift(*phpd);
> +	int page_size = 1 << shift;
> +
> +	if (addr & (page_size - 1))
> +		return 0;
> +
> +	for (;;) {
> +		pte_t *pte = hugepte_offset(*phpd, addr, pdshift);
> +
> +		err = ops->pte_entry(pte, addr, addr + page_size, walk);
> +		if (err)
> +			break;
> +		if (addr >= end - page_size)
> +			break;
> +		addr += page_size;
> +	}

Initially I thought this was a somewhat unintuitive way to structure
this loop, but I see it parallels the structure of walk_pte_range_inner,
so I think the consistency is worth it.

I notice the pte walking code potentially takes some locks: does this
code need to do that? arch/powerpc/mm/hugetlbpage.c says that hugepds
are protected by the mm->page_table_lock, but I don't think we're
taking it in this code.
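(To make that question concrete: I was imagining something along the
lines of the untested sketch below. I haven't checked whether
mm->page_table_lock is actually the right lock to take at this point
in the walk, whether any callers already hold it, or whether it is
safe to hold a spinlock across ops->pte_entry, so please treat this
purely as an illustration of what I was wondering about.

	/*
	 * hugepds are said to be protected by mm->page_table_lock
	 * (arch/powerpc/mm/hugetlbpage.c) - do we need it here?
	 */
	spin_lock(&walk->mm->page_table_lock);
	for (;;) {
		pte_t *pte = hugepte_offset(*phpd, addr, pdshift);

		err = ops->pte_entry(pte, addr, addr + page_size, walk);
		if (err)
			break;
		if (addr >= end - page_size)
			break;
		addr += page_size;
	}
	spin_unlock(&walk->mm->page_table_lock);

If the lock turns out not to be needed, a comment explaining why the
walk is safe without it would be nice to have.)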
> +#endif
> +	return err;
> +}
> +
>  static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
>  			  struct mm_walk *walk)
>  {
> @@ -108,7 +134,10 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
>  			goto again;
>  		}
>
> -		err = walk_pte_range(pmd, addr, next, walk);
> +		if (is_hugepd(__hugepd(pmd_val(*pmd))))
> +			err = walk_hugepd_range((hugepd_t *)pmd, addr, next, walk, PMD_SHIFT);
> +		else
> +			err = walk_pte_range(pmd, addr, next, walk);
>  		if (err)
>  			break;
>  	} while (pmd++, addr = next, addr != end);
> @@ -157,7 +186,10 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
>  		if (pud_none(*pud))
>  			goto again;
>
> -		err = walk_pmd_range(pud, addr, next, walk);
> +		if (is_hugepd(__hugepd(pud_val(*pud))))
> +			err = walk_hugepd_range((hugepd_t *)pud, addr, next, walk, PUD_SHIFT);
> +		else
> +			err = walk_pmd_range(pud, addr, next, walk);

I'm a bit worried you might end up calling into walk_hugepd_range with
ops->pte_entry == NULL, and then jumping to 0.

static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
			  struct mm_walk *walk)
{
...
	pud = pud_offset(p4d, addr);
	do {
...
		if ((!walk->vma && (pud_leaf(*pud) || !pud_present(*pud))) ||
		    walk->action == ACTION_CONTINUE ||
		    !(ops->pmd_entry || ops->pte_entry))	<<< THIS CHECK
			continue;
...
		if (is_hugepd(__hugepd(pud_val(*pud))))
			err = walk_hugepd_range((hugepd_t *)pud, addr, next, walk, PUD_SHIFT);
		else
			err = walk_pmd_range(pud, addr, next, walk);
		if (err)
			break;
	} while (pud++, addr = next, addr != end);

walk_pud_range will proceed if there is _either_ an ops->pmd_entry _or_
an ops->pte_entry, but walk_hugepd_range will call ops->pte_entry
unconditionally.

The same issue applies to walk_{p4d,pgd}_range...

Kind regards,
Daniel
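P.S. In case it's useful, the most direct way I can see to avoid the
NULL ops->pte_entry call would be to bail out of walk_hugepd_range
early when no pte_entry callback was supplied. Untested sketch only,
and checking at the call sites instead may well be nicer:

	int err = 0;
#ifdef CONFIG_ARCH_HAS_HUGEPD
	const struct mm_walk_ops *ops = walk->ops;
	int shift = hugepd_shift(*phpd);
	int page_size = 1 << shift;

	/* Callers may have supplied only pmd_entry; don't call through NULL. */
	if (!ops->pte_entry)
		return 0;

	if (addr & (page_size - 1))
		return 0;
...
#endif
	return err;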