Date: Thu, 19 May 2022 15:38:24 +0800
From: Muchun Song <songmuchun@bytedance.com>
To: Yang Shi
Cc: Matthew Wilcox, Andrew Morton, Linux MM, Linux Kernel Mailing List
Subject: Re: [v2 PATCH] mm: pvmw: check possible huge PMD map by transhuge_vma_suitable()
References: <20220513191705.457775-1-shy828301@gmail.com>

On Wed, May 18, 2022 at 11:45:14AM -0700, Yang Shi wrote:
> On Tue, May 17, 2022 at 10:31 PM Muchun Song wrote:
> >
> > On Fri, May 13, 2022 at 12:17:05PM -0700, Yang Shi wrote:
> > > IIUC PVMW checks if the vma is possibly huge PMD mapped by
> > > transparent_hugepage_active() and "pvmw->nr_pages >= HPAGE_PMD_NR".
> > >
> > > Actually pvmw->nr_pages is returned by compound_nr() or
> > > folio_nr_pages(), so the page should be a THP as long as
> > > "pvmw->nr_pages >= HPAGE_PMD_NR". And it is guaranteed that a THP
> > > is allocated for a valid VMA in the first place. But it may not be
> > > PMD mapped if the VMA is a file VMA and it is not properly aligned.
> > > transhuge_vma_suitable() is used to do such a check, so replace
> > > transparent_hugepage_active() with it; the latter is too heavy and
> > > overkill here.
> > >
> > > Cc: Matthew Wilcox (Oracle)
> > > Cc: Muchun Song
> > > Signed-off-by: Yang Shi
> > > ---
> > > v2: * Fixed build error for !CONFIG_TRANSPARENT_HUGEPAGE
> > >     * Removed fixes tag per Willy
> > >
> > >  include/linux/huge_mm.h | 8 ++++++--
> > >  mm/page_vma_mapped.c    | 2 +-
> > >  2 files changed, 7 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > > index fbf36bb1be22..c2826b1f4069 100644
> > > --- a/include/linux/huge_mm.h
> > > +++ b/include/linux/huge_mm.h
> > > @@ -117,8 +117,10 @@ extern struct kobj_attribute shmem_enabled_attr;
> > >  extern unsigned long transparent_hugepage_flags;
> > >
> > >  static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
> > > -		unsigned long haddr)
> > > +		unsigned long addr)
> > >  {
> > > +	unsigned long haddr;
> > > +
> > >  	/* Don't have to check pgoff for anonymous vma */
> > >  	if (!vma_is_anonymous(vma)) {
> > >  		if (!IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
> > > @@ -126,6 +128,8 @@ static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
> > >  			return false;
> > >  	}
> > >
> > > +	haddr = addr & HPAGE_PMD_MASK;
> > > +
> > >  	if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
> > >  		return false;
> > >  	return true;
> > > @@ -328,7 +332,7 @@ static inline bool transparent_hugepage_active(struct vm_area_struct *vma)
> > >  }
> > >
> > >  static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
> > > -		unsigned long haddr)
> > > +		unsigned long addr)
> > >  {
> > >  	return false;
> > >  }
> > > diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> > > index c10f839fc410..e971a467fcdf 100644
> > > --- a/mm/page_vma_mapped.c
> > > +++ b/mm/page_vma_mapped.c
> > > @@ -243,7 +243,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> > >  	 * cleared *pmd but not decremented compound_mapcount().
> > >  	 */
> > >  	if ((pvmw->flags & PVMW_SYNC) &&
> > > -	    transparent_hugepage_active(vma) &&
> > > +	    transhuge_vma_suitable(vma, pvmw->address) &&
> >
> > How about the following diff? Then we do not need to change
> > transhuge_vma_suitable(). All the users of transhuge_vma_suitable()
> > already do the alignment themselves.
>
> Thanks for the suggestion. But TBH I don't think this is a better way.
> I did think about this before proposing v2, but I prefer not to
> pollute the code with IS_ENABLED(CONFIG_xxx) since the definition of
> transhuge_vma_suitable() is already protected by #ifdef. Rounding the
> address in transhuge_vma_suitable() seems neater and more readable to
> me.
>
> Some callers of transhuge_vma_suitable() do round the address before
> calling it, but the rounded address is used by other code in the
> callers too.
>

All right.

Reviewed-by: Muchun Song <songmuchun@bytedance.com>

Thanks.
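
P.S. The file-VMA alignment rule the patch relies on can be demonstrated
outside the kernel. Below is a minimal userspace sketch, assuming x86-64
geometry (4 KiB pages, 2 MiB PMD leaves); struct vma and pmd_mappable()
are simplified hypothetical stand-ins for vm_area_struct and the patched
transhuge_vma_suitable(), not the kernel's actual code:

	/*
	 * Userspace sketch only: the struct and constants below are
	 * stand-ins for vm_area_struct and the kernel's HPAGE_PMD_*
	 * macros (x86-64 values assumed).
	 */
	#include <stdbool.h>
	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define HPAGE_PMD_SHIFT	21
	#define HPAGE_PMD_SIZE	(1UL << HPAGE_PMD_SHIFT)
	#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))
	#define HPAGE_PMD_NR	(1UL << (HPAGE_PMD_SHIFT - PAGE_SHIFT))

	struct vma {
		unsigned long vm_start;	/* first VA of the mapping */
		unsigned long vm_end;	/* one past the last VA */
		unsigned long vm_pgoff;	/* file offset, in pages */
		bool anonymous;
	};

	/* Mirrors the patched transhuge_vma_suitable(): accepts any
	 * address in the VMA and rounds it down to a PMD boundary
	 * internally. */
	static bool pmd_mappable(const struct vma *vma, unsigned long addr)
	{
		unsigned long haddr;

		/* A file VMA is PMD-mappable only if VA and file offset
		 * are congruent modulo HPAGE_PMD_NR pages. */
		if (!vma->anonymous &&
		    ((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff) % HPAGE_PMD_NR)
			return false;

		haddr = addr & HPAGE_PMD_MASK;
		return haddr >= vma->vm_start &&
		       haddr + HPAGE_PMD_SIZE <= vma->vm_end;
	}

	int main(void)
	{
		/* 4 MiB file mapping at a 2 MiB VA, file offset 0. */
		struct vma aligned = { 0x400000, 0x800000, 0, false };
		/* Same range, file offset skewed by one page. */
		struct vma skewed  = { 0x400000, 0x800000, 1, false };

		printf("aligned: %d\n", pmd_mappable(&aligned, 0x401234)); /* 1 */
		printf("skewed:  %d\n", pmd_mappable(&skewed, 0x401234));  /* 0 */
		return 0;
	}

With these assumed constants the aligned mapping reports 1 and the
page-skewed one reports 0; the skewed case is exactly the "file VMA and
it is not properly aligned" situation the commit message calls out, where
a THP exists but can never be PMD mapped.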