From: Adam Manzanares <Adam.Manzanares@wdc.com>
To: "lsf-pc@lists.linux-foundation.org" <lsf-pc@lists.linux-foundation.org>
Cc: "linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>
Subject: [LSF/MM TOPIC] User Directed Tiered Memory Management
Date: Wed, 24 Jan 2018 16:22:26 +0000
Message-ID: <cae10844-35cd-991c-c69d-545e774d5a50@wdc.com>

With the introduction of byte-addressable storage devices that have low
latencies, it becomes difficult to decide how to expose these devices
to user space applications. Do we treat them as traditional block
devices, or expose them as DAX-capable devices? A traditional block
device allows us to use the page cache to take advantage of locality
in access patterns, but comes at the expense of extra memory copies
that are extremely costly for random workloads. A DAX-capable device
seems great for the aforementioned random-access workload, but suffers
once there is some locality in the access pattern.
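
For concreteness, here is a minimal userspace sketch of the two
exposure models (the /dev/pmem0 and /dev/dax0.0 paths are placeholders,
and the device-DAX mapping assumes the usual 2 MiB alignment):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2 * 1024 * 1024;	/* device-DAX wants 2 MiB alignment */
	char buf[4096];

	/* Block-device model: read() copies through the page cache, so
	 * access patterns with locality are served from DRAM, but every
	 * miss pays for an extra memory copy. */
	int blk = open("/dev/pmem0", O_RDONLY);
	if (blk >= 0 && read(blk, buf, sizeof(buf)) < 0)
		perror("read");

	/* DAX model: loads and stores hit the media directly; random
	 * access avoids the copy, but there is no cache to exploit
	 * locality. */
	int dax = open("/dev/dax0.0", O_RDWR);
	if (dax >= 0) {
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_SHARED, dax, 0);
		if (p != MAP_FAILED) {
			memset(p, 0, len);	/* direct stores to the device */
			munmap(p, len);
		}
	}

	if (blk >= 0)
		close(blk);
	if (dax >= 0)
		close(dax);
	return 0;
}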

When DAX-capable devices are used as slower/cheaper volatile memory,
treating them as a slower NUMA node with an associated NUMA migration
policy would allow us to take advantage of access pattern locality.
However, this approach suffers from a few drawbacks. First, when those
devices are also persistent, the tiering approach used in NUMA
migration may not guarantee persistence. Second, for devices with
significantly higher latencies than DRAM, the cost of moving clean
pages may be significant. Finally, pages handled via NUMA migration
are a common resource, subject to thrashing under memory pressure.
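
For reference, the kernel-managed flavor of this can already be driven
from user space with move_pages(2); a minimal sketch follows (link
with -lnuma; treating node 1 as the slower, device-backed node is an
assumption):

#define _GNU_SOURCE
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);
	void *page;

	if (posix_memalign(&page, page_size, page_size))
		return 1;
	*(volatile char *)page = 1;	/* fault the page in on a fast node */

	void *pages[1] = { page };
	int nodes[1] = { 1 };		/* assumed slower, device-backed node */
	int status[1];

	/* pid 0 == the calling process */
	if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE))
		perror("move_pages");
	else
		printf("page now on node %d\n", status[0]);

	free(page);
	return 0;
}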

I would like to discuss an alternative approach where memory-intensive
applications mmap these storage devices into their address space. The
application can specify how much DRAM may be used as a cache and can
influence the prefetching and eviction policies. The goal of such an
approach is to minimize the impact that this slightly slower memory
could have on the system when it is treated as a kernel-managed global
resource, as well as to enable use of these devices as persistent
memory. BTW, we criminally ;) used the vm_insert_page function in a
prototype and found it faster than the page cache and swapping
mechanisms when limited to a small amount of DRAM.
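
To give a flavor of the kernel side of that prototype, here is a rough
sketch of a driver ->fault handler built around vm_insert_page();
lookup_or_fill_cache_page() is a hypothetical stand-in for our DRAM
cache lookup, prefetch, and eviction logic, not actual prototype code:

#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical helper: find or populate a DRAM cache page for this
 * file offset, reading from the slow byte-addressable device as
 * needed; the eviction policy decides which cache pages to recycle. */
struct page *lookup_or_fill_cache_page(struct file *file, pgoff_t pgoff);

static vm_fault_t tiered_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct page *page;

	page = lookup_or_fill_cache_page(vma->vm_file, vmf->pgoff);
	if (!page)
		return VM_FAULT_OOM;

	/* Map the cache page into the faulting task's address space.
	 * Note that vm_insert_page() is documented for pages the
	 * driver itself allocated, hence the "criminally" above. */
	if (vm_insert_page(vma, vmf->address, page))
		return VM_FAULT_SIGBUS;

	return VM_FAULT_NOPAGE;
}

static const struct vm_operations_struct tiered_vm_ops = {
	.fault = tiered_fault,
};

Routing faults this way keeps the DRAM cache private to the mapping
rather than part of a globally shared resource, which is exactly the
isolation argued for above.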
