From: Huaisheng HS1 Ye <yehs1@lenovo.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Jeff Moyer <jmoyer@redhat.com>,
Dan Williams <dan.j.williams@intel.com>,
Michal Hocko <mhocko@suse.com>,
linux-nvdimm <linux-nvdimm@lists.01.org>,
Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>,
NingTing Cheng <chengnt@lenovo.com>,
Dave Hansen <dave.hansen@intel.com>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
"pasha.tatashin@oracle.com" <pasha.tatashin@oracle.com>,
Linux MM <linux-mm@kvack.org>, "colyli@suse.de" <colyli@suse.de>,
Johannes Weiner <hannes@cmpxchg.org>,
Andrew Morton <akpm@linux-foundation.org>,
Sasha Levin <alexander.levin@verizon.com>,
Mel Gorman <mgorman@techsingularity.net>,
Vlastimil Babka <vbabka@suse.cz>, Ocean HY1 He <hehy1@lenovo.com>
Subject: RE: [External] Re: [RFC PATCH v1 0/6] use mm to manage NVDIMM (pmem) zone
Date: Wed, 16 May 2018 02:05:05 +0000
Message-ID: <HK2PR03MB1684F8D2724BB8AF1FCCF02A92920@HK2PR03MB1684.apcprd03.prod.outlook.com>
In-Reply-To: <20180515162003.GA26489@bombadil.infradead.org>
> From: Matthew Wilcox [mailto:willy@infradead.org]
> Sent: Wednesday, May 16, 2018 12:20 AM
> > > > > Then there's the problem of reconnecting the page cache (which is
> > > > > pointed to by ephemeral data structures like inodes and dentries) to
> > > > > the new inodes.
> > > > Yes, it is not easy.
> > >
> > > Right ... and until we have that ability, there's no point in this patch.
> > We are focusing to realize this ability.
>
> But is it the right approach? So far we have (I think) two parallel
> activities. The first is for local storage, using DAX to store files
> directly on the pmem. The second is a physical block cache for network
> filesystems (both NAS and SAN). You seem to be wanting to supplant the
> second effort, but I think it's much harder to reconnect the logical cache
> (ie the page cache) than it is the physical cache (ie the block cache).
Dear Matthew,
Thanks for correcting my understanding of the cache-line issue.
But I have a question about that. Assuming the NVDIMM works in pmem mode, even if we
use it as a physical block cache, like dm-cache, there is still a potential risk from
this cache-line issue, because NVDIMMs are byte-addressable storage, right?
If a system crash happens while the data pointed to by bio_vec.bv_page is being copied
to the NVDIMM, the CPU doesn't get the opportunity to flush all the dirty data from its
cache lines to the NVDIMM.
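To make that window concrete, below is a minimal userspace sketch of the
store-and-flush pattern (assuming x86 with CLFLUSHOPT; the function name is
illustrative, and the kernel's own pmem copy path uses memcpy_flushcache()
instead). A crash between the memcpy() and the end of the flush loop leaves
some cache lines persisted on the NVDIMM and others not:

#include <immintrin.h>	/* _mm_clflushopt, _mm_sfence */
#include <stdint.h>
#include <string.h>

#define CACHELINE 64

/* Copy a buffer into pmem, then explicitly write back every cache
 * line it touched.  The data is only durable after the fence. */
static void copy_to_pmem(void *pmem, const void *src, size_t len)
{
	uintptr_t p;

	memcpy(pmem, src, len);
	for (p = (uintptr_t)pmem & ~(uintptr_t)(CACHELINE - 1);
	     p < (uintptr_t)pmem + len; p += CACHELINE)
		_mm_clflushopt((void *)p);
	_mm_sfence();	/* order the flushes before any "data valid" marker */
}

(Compile with -mclflushopt.)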
I know there is BTT, which is used to guarantee sector atomicity in block mode,
but in pmem mode such a crash will likely leave a mix of new and old data within
one page of the NVDIMM.
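As I understand it, BTT gets that atomicity from an indirection map: new data
goes to a free block first, and the switch to it is a single aligned 8-byte
map-entry store, which the CPU performs atomically. A toy sketch (an
illustration of the idea, not the real nd_btt code; flushing and locking are
omitted):

#include <stdint.h>
#include <string.h>

#define SECTOR	512
#define NSEC	1024

static uint8_t	blocks[NSEC + 1][SECTOR];	/* one spare block for writes */
static uint64_t	map[NSEC];			/* LBA -> physical block      */
static uint64_t	free_block = NSEC;

static void btt_init(void)
{
	for (uint64_t i = 0; i < NSEC; i++)
		map[i] = i;			/* identity mapping at start  */
}

static void btt_write(uint64_t lba, const void *buf)
{
	uint64_t old = map[lba];

	memcpy(blocks[free_block], buf, SECTOR);  /* 1: fill the free block */
	/* (flush the new block to media here, as in the sketch above)      */
	__atomic_store_n(&map[lba], free_block, __ATOMIC_RELEASE); /* 2: flip */
	free_block = old;			  /* 3: recycle the old block */
}

A reader sees either the whole old sector or the whole new one, because step 2
is a single atomic store. In pmem mode there is no such map, hence the old/new
mix described above.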
Correct me if anything here is wrong.
Another question: if we use NVDIMMs as a physical block cache for network filesystems,
does the industry have an existing implementation that bypasses the page cache, similar
to the DAX approach? That is to say, one that stores data to the NVDIMMs directly from
userspace, rather than copying data from kernel-space memory to the NVDIMMs.
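What I mean is something like the following (a sketch assuming a filesystem
mounted with -o dax on the pmem namespace, and MAP_SYNC from Linux 4.15; the
file path is hypothetical):

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MAP_SYNC			/* older glibc lacks these */
#define MAP_SYNC		0x080000
#endif
#ifndef MAP_SHARED_VALIDATE
#define MAP_SHARED_VALIDATE	0x03
#endif

int main(void)
{
	int fd = open("/mnt/pmem/cacheblock", O_RDWR);	/* hypothetical path */

	if (fd < 0)
		return 1;

	/* With MAP_SYNC the mapping is only established if stores through
	 * it (plus a CPU cache flush) are durable without fsync(), i.e.
	 * no page cache and no copy through kernel memory. */
	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	memcpy(p, "data", 4);	/* the store lands directly on the NVDIMM */
	/* flush + fence as in the first sketch to make it durable */

	munmap(p, 4096);
	close(fd);
	return 0;
}

Is there an existing implementation along these lines for the
network-filesystem block-cache case?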
BRs,
Huaisheng
Thread overview: 23+ messages
2018-05-07 14:50 Huaisheng Ye
2018-05-07 14:50 ` [RFC PATCH v1 4/6] arch/x86/kernel: mark NVDIMM regions from e820_table Huaisheng Ye
2018-05-07 18:46 ` [RFC PATCH v1 0/6] use mm to manage NVDIMM (pmem) zone Matthew Wilcox
2018-05-07 18:57 ` Dan Williams
2018-05-07 19:08 ` Jeff Moyer
2018-05-07 19:17 ` Dan Williams
2018-05-07 19:28 ` Jeff Moyer
2018-05-07 19:29 ` Dan Williams
2018-05-08 2:59 ` [External] " Huaisheng HS1 Ye
2018-05-08 3:09 ` Matthew Wilcox
2018-05-09 4:47 ` Huaisheng HS1 Ye
2018-05-10 16:27 ` Matthew Wilcox
2018-05-15 16:07 ` Huaisheng HS1 Ye
2018-05-15 16:20 ` Matthew Wilcox
2018-05-16 2:05 ` Huaisheng HS1 Ye [this message]
2018-05-16 2:48 ` Dan Williams
2018-05-16 8:33 ` Huaisheng HS1 Ye
2018-05-16 2:52 ` Matthew Wilcox
2018-05-16 4:10 ` Dan Williams
2018-05-08 3:52 ` Dan Williams
2018-05-07 19:18 ` Matthew Wilcox
2018-05-07 19:30 ` Dan Williams
2018-05-08 0:54 ` [External] " Huaisheng HS1 Ye