From: Yang Shi
Date: Wed, 6 May 2020 10:59:23 -0700
Subject: Re: [PATCH v3 02/25] mm: Introduce thp_size
To: Matthew Wilcox
Cc: Linux FS-devel Mailing List, Linux MM, Linux Kernel Mailing List
In-Reply-To: <20200429133657.22632-3-willy@infradead.org>
References: <20200429133657.22632-1-willy@infradead.org> <20200429133657.22632-3-willy@infradead.org>

On Wed, Apr 29, 2020 at 6:37 AM Matthew Wilcox wrote:
>
> From: "Matthew Wilcox (Oracle)"
>
> This is like page_size(), but compiles down to just PAGE_SIZE if THP
> are disabled. Convert the users of hpage_nr_pages() which would prefer
> this interface.
>
> Signed-off-by: Matthew Wilcox (Oracle)
> ---
>  drivers/nvdimm/btt.c    | 4 +---
>  drivers/nvdimm/pmem.c   | 6 ++----
>  include/linux/huge_mm.h | 7 +++++++
>  mm/internal.h           | 2 +-
>  mm/page_io.c            | 2 +-
>  mm/page_vma_mapped.c    | 4 ++--
>  6 files changed, 14 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
> index 3b09419218d6..78e8d972d45a 100644
> --- a/drivers/nvdimm/btt.c
> +++ b/drivers/nvdimm/btt.c
> @@ -1488,10 +1488,8 @@ static int btt_rw_page(struct block_device *bdev, sector_t sector,
>  {
>  	struct btt *btt = bdev->bd_disk->private_data;
>  	int rc;
> -	unsigned int len;
>
> -	len = hpage_nr_pages(page) * PAGE_SIZE;
> -	rc = btt_do_bvec(btt, NULL, page, len, 0, op, sector);
> +	rc = btt_do_bvec(btt, NULL, page, thp_size(page), 0, op, sector);
>  	if (rc == 0)
>  		page_endio(page, op_is_write(op), 0);
>
> diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
> index 2df6994acf83..184c8b516543 100644
> --- a/drivers/nvdimm/pmem.c
> +++ b/drivers/nvdimm/pmem.c
> @@ -235,11 +235,9 @@ static int pmem_rw_page(struct block_device *bdev, sector_t sector,
>  	blk_status_t rc;
>
>  	if (op_is_write(op))
> -		rc = pmem_do_write(pmem, page, 0, sector,
> -				hpage_nr_pages(page) * PAGE_SIZE);
> +		rc = pmem_do_write(pmem, page, 0, sector, tmp_size(page));

s/tmp_size/thp_size

>  	else
> -		rc = pmem_do_read(pmem, page, 0, sector,
> -				hpage_nr_pages(page) * PAGE_SIZE);
> +		rc = pmem_do_read(pmem, page, 0, sector, thp_size(page));
>  	/*
>  	 * The ->rw_page interface is subtle and tricky.  The core
>  	 * retries on any error, so we can only invoke page_endio() in
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 6bec4b5b61e1..e944f9757349 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -271,6 +271,11 @@ static inline int hpage_nr_pages(struct page *page)
>  	return compound_nr(page);
>  }
>
> +static inline unsigned long thp_size(struct page *page)
> +{
> +	return page_size(page);
> +}
> +
>  struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
>  		pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
>  struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
> @@ -329,6 +334,8 @@ static inline int hpage_nr_pages(struct page *page)
>  	return 1;
>  }
>
> +#define thp_size(x)		PAGE_SIZE
> +
>  static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
>  {
>  	return false;
> diff --git a/mm/internal.h b/mm/internal.h
> index f762a34b0c57..5efb13d5c226 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -386,7 +386,7 @@ vma_address(struct page *page, struct vm_area_struct *vma)
>  	unsigned long start, end;
>
>  	start = __vma_address(page, vma);
> -	end = start + PAGE_SIZE * (hpage_nr_pages(page) - 1);
> +	end = start + thp_size(page) - PAGE_SIZE;
>
>  	/* page should be within @vma mapping range */
>  	VM_BUG_ON_VMA(end < vma->vm_start || start >= vma->vm_end, vma);
> diff --git a/mm/page_io.c b/mm/page_io.c
> index 76965be1d40e..dd935129e3cb 100644
> --- a/mm/page_io.c
> +++ b/mm/page_io.c
> @@ -41,7 +41,7 @@ static struct bio *get_swap_bio(gfp_t gfp_flags,
>  		bio->bi_iter.bi_sector <<= PAGE_SHIFT - 9;
>  		bio->bi_end_io = end_io;
>
> -		bio_add_page(bio, page, PAGE_SIZE * hpage_nr_pages(page), 0);
> +		bio_add_page(bio, page, thp_size(page), 0);
>  	}
>  	return bio;
>  }
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 719c35246cfa..e65629c056e8 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -227,7 +227,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  		if (pvmw->address >= pvmw->vma->vm_end ||
>  		    pvmw->address >=
>  			__vma_address(pvmw->page, pvmw->vma) +
> -			hpage_nr_pages(pvmw->page) * PAGE_SIZE)
> +			thp_size(pvmw->page))
>  			return not_found(pvmw);
>  		/* Did we cross page table boundary? */
>  		if (pvmw->address % PMD_SIZE == 0) {
> @@ -268,7 +268,7 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
>  	unsigned long start, end;
>
>  	start = __vma_address(page, vma);
> -	end = start + PAGE_SIZE * (hpage_nr_pages(page) - 1);
> +	end = start + thp_size(page) - PAGE_SIZE;
>
>  	if (unlikely(end < vma->vm_start || start >= vma->vm_end))
>  		return 0;
> --
> 2.26.2
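
For readers following the thread, the behaviour the commit message describes is easy to model outside the kernel: a minimal userspace sketch of the two thp_size() definitions follows, with CONFIG_TRANSPARENT_HUGEPAGE selecting between them. The struct page layout, the nr_pages field, and the multiplication standing in for page_size() are simplified assumptions made for illustration only; they are not the kernel's definitions.

#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Simplified stand-in for the kernel's struct page. */
struct page {
	unsigned long nr_pages;	/* 1 for a base page, 512 for a 2MB THP */
};

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/*
 * THP build: report the true size of the (possibly compound) page.
 * The multiplication here models what page_size() computes.
 */
static inline unsigned long thp_size(struct page *page)
{
	return page->nr_pages * PAGE_SIZE;
}
#else
/*
 * !THP build: every page is a base page, so the macro folds to a
 * compile-time constant, as with the #define added to huge_mm.h.
 */
#define thp_size(x)	PAGE_SIZE
#endif

int main(void)
{
	struct page base = { .nr_pages = 1 };
	struct page huge = { .nr_pages = 512 };	/* 2MB THP on x86-64 */

	printf("base page: %lu bytes\n", thp_size(&base));
	printf("huge page: %lu bytes\n", thp_size(&huge));
	return 0;
}

Compiling with -DCONFIG_TRANSPARENT_HUGEPAGE takes the inline-function path and prints 2097152 bytes for the compound page; without it, thp_size() expands to the constant PAGE_SIZE regardless of the page handed to it, which is why callers such as btt_rw_page() can drop the hpage_nr_pages() * PAGE_SIZE arithmetic with no cost in non-THP builds.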