From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 18 May 2022 13:31:29 +0800
From: Muchun Song <songmuchun@bytedance.com>
To: Yang Shi <shy828301@gmail.com>
Cc: willy@infradead.org, akpm@linux-foundation.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [v2 PATCH] mm: pvmw: check possible huge PMD map by transhuge_vma_suitable()
References: <20220513191705.457775-1-shy828301@gmail.com>
In-Reply-To: <20220513191705.457775-1-shy828301@gmail.com>

On Fri, May 13, 2022 at 12:17:05PM -0700, Yang Shi wrote:
> IIUC PVMW checks if the vma is possibly huge PMD mapped by
> transparent_hugepage_active() and "pvmw->nr_pages >= HPAGE_PMD_NR".
>
> Actually pvmw->nr_pages is returned by compound_nr() or
> folio_nr_pages(), so the page should be a THP as long as
> "pvmw->nr_pages >= HPAGE_PMD_NR". And it is guaranteed that a THP is
> allocated for a valid VMA in the first place. But it may not be PMD
> mapped if the VMA is a file VMA and it is not properly aligned.
> transhuge_vma_suitable() is used to do such a check, so replace
> transparent_hugepage_active(), which is too heavy and overkill, with
> it.
>
> Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: Muchun Song <songmuchun@bytedance.com>
> Signed-off-by: Yang Shi <shy828301@gmail.com>
> ---
> v2: * Fixed build error for !CONFIG_TRANSPARENT_HUGEPAGE
>     * Removed fixes tag per Willy
>
>  include/linux/huge_mm.h | 8 ++++++--
>  mm/page_vma_mapped.c    | 2 +-
>  2 files changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index fbf36bb1be22..c2826b1f4069 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -117,8 +117,10 @@ extern struct kobj_attribute shmem_enabled_attr;
>  extern unsigned long transparent_hugepage_flags;
>
>  static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
> -		unsigned long haddr)
> +		unsigned long addr)
>  {
> +	unsigned long haddr;
> +
>  	/* Don't have to check pgoff for anonymous vma */
>  	if (!vma_is_anonymous(vma)) {
>  		if (!IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
> @@ -126,6 +128,8 @@ static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
>  			return false;
>  	}
>
> +	haddr = addr & HPAGE_PMD_MASK;
> +
>  	if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
>  		return false;
>  	return true;
> @@ -328,7 +332,7 @@ static inline bool transparent_hugepage_active(struct vm_area_struct *vma)
>  }
>
>  static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
> -		unsigned long haddr)
> +		unsigned long addr)
>  {
>  	return false;
>  }
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index c10f839fc410..e971a467fcdf 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -243,7 +243,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  	 * cleared *pmd but not decremented compound_mapcount().
>  	 */
>  	if ((pvmw->flags & PVMW_SYNC) &&
> -	    transparent_hugepage_active(vma) &&
> +	    transhuge_vma_suitable(vma, pvmw->address) &&

How about the following diff? Then we do not need to change
transhuge_vma_suitable(). All the users of transhuge_vma_suitable()
already do the alignment themselves. Thanks.
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index c10f839fc410..0aed5ca60c67 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -243,7 +243,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	 * cleared *pmd but not decremented compound_mapcount().
 	 */
 	if ((pvmw->flags & PVMW_SYNC) &&
-	    transparent_hugepage_active(vma) &&
+	    IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
+	    transhuge_vma_suitable(vma, pvmw->address & HPAGE_PMD_MASK) &&
 	    (pvmw->nr_pages >= HPAGE_PMD_NR)) {
 		spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);

>  	    (pvmw->nr_pages >= HPAGE_PMD_NR)) {
>  		spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
>
> --
> 2.26.3
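
[Editor's note: to make the alignment rule in this thread concrete, here is
a minimal userspace sketch of the check transhuge_vma_suitable() performs,
assuming 4 KiB base pages and a 2 MiB PMD size (so HPAGE_PMD_NR == 512).
The "struct vma" and "suitable()" names below are simplified stand-ins for
illustration, not the kernel's API.]

/*
 * Userspace model of the transhuge_vma_suitable() logic discussed above.
 * Not kernel code: constants assume 4 KiB pages and 2 MiB PMD mappings.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define HPAGE_PMD_SIZE	(1UL << 21)			/* 2 MiB */
#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))
#define HPAGE_PMD_NR	(HPAGE_PMD_SIZE >> PAGE_SHIFT)	/* 512 */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

struct vma {
	unsigned long vm_start;	/* virtual start address */
	unsigned long vm_end;	/* virtual end address */
	unsigned long vm_pgoff;	/* file offset, in pages */
	bool anonymous;
};

/*
 * A file-backed VMA can only be PMD mapped if its file offset and its
 * virtual start address are congruent modulo HPAGE_PMD_NR; in addition,
 * the PMD-aligned range around addr must lie entirely inside the VMA.
 */
static bool suitable(const struct vma *vma, unsigned long addr)
{
	unsigned long haddr = addr & HPAGE_PMD_MASK;

	if (!vma->anonymous &&
	    !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
			HPAGE_PMD_NR))
		return false;

	return haddr >= vma->vm_start && haddr + HPAGE_PMD_SIZE <= vma->vm_end;
}

int main(void)
{
	/* A 4 MiB file mapping whose pgoff lines up with its address. */
	struct vma aligned    = { 0x200000, 0x600000, 0, false };
	/* The same mapping shifted by one page: pgoff no longer lines up. */
	struct vma misaligned = { 0x201000, 0x601000, 0, false };

	printf("aligned:    %d\n", suitable(&aligned, 0x300000));    /* 1 */
	printf("misaligned: %d\n", suitable(&misaligned, 0x300000)); /* 0 */
	return 0;
}

[The second case is the situation the commit message describes: a THP was
allocated, but the file VMA's pgoff congruence rules out a PMD mapping, so
only the cheap alignment check is needed, not transparent_hugepage_active().]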