From mboxrd@z Thu Jan  1 00:00:00 1970
From: Toshiki Fukasawa
To: linux-mm@kvack.org, dan.j.williams@intel.com
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
	mhocko@kernel.org, adobriyan@gmail.com, hch@lst.de,
	longman@redhat.com, sfr@canb.auug.org.au, mst@redhat.com,
	cai@lca.pw, Naoya Horiguchi, Junichi Nomura
Subject: [PATCH 3/3] mm: make pfn walker support ZONE_DEVICE
Date: Fri, 8 Nov 2019 00:08:13 +0000
Message-ID: <20191108000855.25209-4-t-fukasawa@vx.jp.nec.com>
References: <20191108000855.25209-1-t-fukasawa@vx.jp.nec.com>
In-Reply-To: <20191108000855.25209-1-t-fukasawa@vx.jp.nec.com>

This patch allows the pfn walker to read pages on ZONE_DEVICE.
Two points need care:

a) The reserved pages indicated by vmem_altmap->reserve are
   uninitialized, so they must be skipped when reading.

b) To get the vmem_altmap we need get_dev_pagemap(), but calling it
   for every pfn is too slow.

This patch solves both. Since a vmem_altmap can reserve only the
first few pages of a ZONE_DEVICE range, we can reduce the number of
lookups by counting the sequential valid pages that start at a given
pfn.

Signed-off-by: Toshiki Fukasawa
---
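A simplified sketch of the walk this patch implements in
kpage_common_read(), pulled out of the diff below for readability.
walk_one_pfn() is a hypothetical helper used only for illustration;
pfn_zone_device() is introduced earlier in this series, and
nr_valid_pages_zone_device() is added by this patch:

	static struct page *walk_one_pfn(unsigned long pfn,
					 unsigned long *valid_pages)
	{
		struct page *page = pfn_to_online_page(pfn);

		if (!page && pfn_zone_device(pfn)) {
			/* One pagemap lookup per run of valid pages. */
			if (!*valid_pages)
				*valid_pages = nr_valid_pages_zone_device(pfn);
			if (*valid_pages) {
				/* This memmap is initialized; safe to read. */
				page = pfn_to_page(pfn);
				(*valid_pages)--;
			}
		} else if (*valid_pages) {
			/* The ZONE_DEVICE range was hot-removed mid-walk. */
			*valid_pages = 0;
		}
		return page;	/* NULL means skip this pfn */
	}

The cached count amortizes one get_dev_pagemap()/put_dev_pagemap()
pair over a whole run of valid device pfns, instead of paying for the
lookup on every iteration.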

 fs/proc/page.c           | 22 ++++++++++++++++++----
 include/linux/memremap.h |  6 ++++++
 mm/memremap.c            | 29 +++++++++++++++++++++++++++++
 3 files changed, 53 insertions(+), 4 deletions(-)

diff --git a/fs/proc/page.c b/fs/proc/page.c
index a49b638..b6241ea 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -33,6 +33,7 @@ static ssize_t kpage_common_read(struct file *file, char __user *buf,
 	struct page *ppage;
 	unsigned long src = *ppos;
 	unsigned long pfn;
+	unsigned long valid_pages = 0;
 	ssize_t ret = 0;
 
 	pfn = src / KPMSIZE;
@@ -41,11 +42,24 @@ static ssize_t kpage_common_read(struct file *file, char __user *buf,
 		return -EINVAL;
 
 	while (count > 0) {
-		/*
-		 * TODO: ZONE_DEVICE support requires to identify
-		 * memmaps that were actually initialized.
-		 */
 		ppage = pfn_to_online_page(pfn);
+		if (!ppage && pfn_zone_device(pfn)) {
+			/*
+			 * Skip reading the first few uninitialized pages on
+			 * ZONE_DEVICE, and count the valid pages starting
+			 * at this pfn to minimize the number of calls to
+			 * nr_valid_pages_zone_device().
+			 */
+			if (!valid_pages)
+				valid_pages = nr_valid_pages_zone_device(pfn);
+			if (valid_pages) {
+				ppage = pfn_to_page(pfn);
+				valid_pages--;
+			}
+		} else if (valid_pages) {
+			/* The ZONE_DEVICE range has been hot-removed */
+			valid_pages = 0;
+		}
 
 		if (put_user(read_fn(ppage), out)) {
 			ret = -EFAULT;
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 6fefb09..d111ae3 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -123,6 +123,7 @@ static inline struct vmem_altmap *pgmap_altmap(struct dev_pagemap *pgmap)
 }
 
 #ifdef CONFIG_ZONE_DEVICE
+unsigned long nr_valid_pages_zone_device(unsigned long pfn);
 void *memremap_pages(struct dev_pagemap *pgmap, int nid);
 void memunmap_pages(struct dev_pagemap *pgmap);
 void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
@@ -133,6 +134,11 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 unsigned long vmem_altmap_offset(struct vmem_altmap *altmap);
 void vmem_altmap_free(struct vmem_altmap *altmap, unsigned long nr_pfns);
 #else
+static inline unsigned long nr_valid_pages_zone_device(unsigned long pfn)
+{
+	return 0;
+}
+
 static inline void *devm_memremap_pages(struct device *dev,
 		struct dev_pagemap *pgmap)
 {
diff --git a/mm/memremap.c b/mm/memremap.c
index 8a97fd4..307c73e 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -73,6 +73,35 @@ static unsigned long pfn_next(unsigned long pfn)
 	return pfn + 1;
 }
 
+/*
+ * This returns the number of sequential valid pages starting from @pfn
+ * on ZONE_DEVICE. The invalid pages reserved by the driver are the
+ * first few pages of the ZONE_DEVICE range.
+ */
+unsigned long nr_valid_pages_zone_device(unsigned long pfn)
+{
+	struct dev_pagemap *pgmap;
+	struct vmem_altmap *altmap;
+	unsigned long pages;
+
+	pgmap = get_dev_pagemap(pfn, NULL);
+	if (!pgmap)
+		return 0;
+	altmap = pgmap_altmap(pgmap);
+	if (altmap && pfn < (altmap->base_pfn + altmap->reserve))
+		pages = 0;
+	else
+		/*
+		 * PHYS_PFN(pgmap->res.end) is the last pfn of this pgmap
+		 * (not the first pfn of the next mapping).
+		 */
+		pages = PHYS_PFN(pgmap->res.end) - pfn + 1;
+
+	put_dev_pagemap(pgmap);
+
+	return pages;
+}
+
 #define for_each_device_pfn(pfn, map)				\
 	for (pfn = pfn_first(map); pfn < pfn_end(map); pfn = pfn_next(pfn))
 
-- 
1.8.3.1