From: Matthew Wilcox <willy@infradead.org>
To: Huaisheng Ye <yehs1@lenovo.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, mhocko@suse.com,
vbabka@suse.cz, mgorman@techsingularity.net,
pasha.tatashin@oracle.com, alexander.levin@verizon.com,
hannes@cmpxchg.org, penguin-kernel@I-love.SAKURA.ne.jp,
colyli@suse.de, chengnt@lenovo.com, linux-kernel@vger.kernel.org,
linux-nvdimm@lists.01.org
Subject: Re: [RFC PATCH v1 0/6] use mm to manage NVDIMM (pmem) zone
Date: Mon, 7 May 2018 11:46:22 -0700
Message-ID: <20180507184622.GB12361@bombadil.infradead.org>
In-Reply-To: <1525704627-30114-1-git-send-email-yehs1@lenovo.com>
On Mon, May 07, 2018 at 10:50:21PM +0800, Huaisheng Ye wrote:
> Traditionally, the mm (memory management) subsystem treats NVDIMMs as
> belonging to the DEVICE zone, a virtual zone whose start and end pfn
> are both 0, so mm does not manage NVDIMM directly the way it manages
> DRAM. Instead, the kernel relies on the corresponding drivers, located
> under drivers/nvdimm/ and drivers/acpi/nfit/, plus filesystem support,
> to allocate and free NVDIMM memory via the memory hotplug
> implementation.
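For context, ZONE_DEVICE today is exactly that kind of placeholder:
its pages are created by the pmem drivers through the memory hotplug
path and never enter the buddy allocator. The existing helper in
include/linux/mm.h (quoted from memory, so treat it as approximate)
just tests the zone number:

	static inline bool is_zone_device_page(const struct page *page)
	{
		return page_zonenum(page) == ZONE_DEVICE;
	}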
You probably want to let linux-nvdimm know about this patch set.
Adding it to the cc. Also, I only received patches 0 and 4. What
happened to 1-3, 5, and 6?
> With the current kernel, many of mm's classical features, such as the
> buddy system, the swap mechanism, and the page cache, cannot be used
> with NVDIMM. What we are doing is expanding kernel mm's capabilities
> so that it can handle NVDIMM like DRAM. Furthermore, we let mm treat
> DRAM and NVDIMM separately, so that mm places only critical pages in
> the NVDIMM zone; to that end we created a new zone type, the NVM zone.
> That is to say, traditional (or normal) pages are still stored in the
> DRAM-backed zones such as Normal, DMA32, and DMA, but critical pages,
> which we want to survive a power failure or system crash, are made
> persistent by storing them in the NVM zone.
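If I'm reading this right, the intended usage would look something
like the sketch below. GFP_NVM here is a guessed name for whatever
flag patch 3 wires into GFP_ZONE_TABLE, so take this as an
illustration of the idea rather than the actual interface:

	/* Hypothetical sketch: GFP_NVM is a guessed flag name. */
	struct page *page;

	/* Ask for one page from the persistent NVM zone... */
	page = alloc_pages(GFP_NVM | __GFP_ZERO, 0);
	/* ...and fall back to ordinary DRAM if none is available. */
	if (!page)
		page = alloc_pages(GFP_KERNEL | __GFP_ZERO, 0);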
>
> We installed two NVDIMMs, each with 125GB of capacity, in a Lenovo
> ThinkSystem server as the development platform. With the patches
> below, mm can create NVM zones for the NVDIMMs.
>
> Here is the dmesg info:
> Initmem setup node 0 [mem 0x0000000000001000-0x000000237fffffff]
> On node 0 totalpages: 36879666
>   DMA zone: 64 pages used for memmap
>   DMA zone: 23 pages reserved
>   DMA zone: 3999 pages, LIFO batch:0
> mminit::memmap_init Initialising map node 0 zone 0 pfns 1 -> 4096
>   DMA32 zone: 10935 pages used for memmap
>   DMA32 zone: 699795 pages, LIFO batch:31
> mminit::memmap_init Initialising map node 0 zone 1 pfns 4096 -> 1048576
>   Normal zone: 53248 pages used for memmap
>   Normal zone: 3407872 pages, LIFO batch:31
> mminit::memmap_init Initialising map node 0 zone 2 pfns 1048576 -> 4456448
>   NVM zone: 512000 pages used for memmap
>   NVM zone: 32768000 pages, LIFO batch:31
> mminit::memmap_init Initialising map node 0 zone 3 pfns 4456448 -> 37224448
> Initmem setup node 1 [mem 0x0000002380000000-0x00000046bfffffff]
> On node 1 totalpages: 36962304
>   Normal zone: 65536 pages used for memmap
>   Normal zone: 4194304 pages, LIFO batch:31
> mminit::memmap_init Initialising map node 1 zone 2 pfns 37224448 -> 41418752
>   NVM zone: 512000 pages used for memmap
>   NVM zone: 32768000 pages, LIFO batch:31
> mminit::memmap_init Initialising map node 1 zone 3 pfns 41418752 -> 74186752
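Note the layout in the log above: on each node the NVM zone comes
after Normal as zone 3, and the memmap accounting is self-consistent:
512000 memmap pages x 4 KiB equals 32768000 struct pages x 64 bytes
each.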
>
> This comes from /proc/zoneinfo:
> Node 0, zone NVM
>   pages free     32768000
>         min      15244
>         low      48012
>         high     80780
>         spanned  32768000
>         present  32768000
>         managed  32768000
>         protection: (0, 0, 0, 0, 0, 0)
>   nr_free_pages  32768000
> Node 1, zone NVM
>   pages free     32768000
>         min      15244
>         low      48012
>         high     80780
>         spanned  32768000
>         present  32768000
>         managed  32768000
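A quick sanity check on the capacity: each node's NVM zone spans
32768000 pages, and 32768000 x 4 KiB = 125 GiB, which matches the
125GB per-DIMM figure quoted above.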
>
> Huaisheng Ye (6):
> mm/memblock: Expand definition of flags to support NVDIMM
> mm/page_alloc.c: get pfn range with flags of memblock
> mm, zone_type: create ZONE_NVM and fill into GFP_ZONE_TABLE
> arch/x86/kernel: mark NVDIMM regions from e820_table
> mm: get zone spanned pages separately for DRAM and NVDIMM
> arch/x86/mm: create page table mapping for DRAM and NVDIMM both
>
> arch/x86/include/asm/e820/api.h | 3 +++
> arch/x86/kernel/e820.c | 20 +++++++++++++-
> arch/x86/kernel/setup.c | 8 ++++++
> arch/x86/mm/init_64.c | 16 +++++++++++
> include/linux/gfp.h | 57 ++++++++++++++++++++++++++++++++++++---
> include/linux/memblock.h | 19 +++++++++++++
> include/linux/mm.h | 4 +++
> include/linux/mmzone.h | 3 +++
> mm/Kconfig | 16 +++++++++++
> mm/memblock.c | 46 +++++++++++++++++++++++++++----
> mm/nobootmem.c | 5 ++--
> mm/page_alloc.c | 60 ++++++++++++++++++++++++++++++++++++++++-
> 12 files changed, 245 insertions(+), 12 deletions(-)
>
> --
> 1.8.3.1
>