From: "Nakajima, Jun" <jun.nakajima@intel.com>
To: Rik van Riel <riel@redhat.com>
Cc: "lsf-pc@lists.linuxfoundation.org"
<lsf-pc@lists.linuxfoundation.org>,
Linux Memory Management List <linux-mm@kvack.org>,
Linux kernel Mailing List <linux-kernel@vger.kernel.org>,
KVM list <kvm@vger.kernel.org>
Subject: Re: [LSF/MM TOPIC] VM containers
Date: Sat, 23 Jan 2016 23:41:22 +0000
Message-ID: <439BF796-53D3-48C9-8578-A0733DDE8001@intel.com>
In-Reply-To: <56A2511F.1080900@redhat.com>
> On Jan 22, 2016, at 7:56 AM, Rik van Riel <riel@redhat.com> wrote:
>
> Hi,
>
> I am trying to gauge interest in discussing VM containers at the LSF/MM
> summit this year. Projects like ClearLinux, Qubes, and others are all
> trying to use virtual machines as better isolated containers.
>
> That changes some of the goals the memory management subsystem has,
> from "use all the resources effectively" to "use as few resources as
> necessary, in case the host needs the memory for something else".
>
> These VMs could be as small as running just one application, so this
> goes a little further than simply trying to squeeze more virtual
> machines into a system with frontswap and cleancache.
I would be very interested in discussing this topic, and I agree that "a topic exploring paravirt interfaces for anonymous memory would be really useful" (as James pointed out).
Beyond memory consumption, I would also be interested in whether we can harden the kernel through paravirt interfaces for memory protection in VMs. For example, the hypervisor could write-protect parts of the guest's page tables or other kernel data structures; would that help?
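
As a rough sketch only (no such interface exists today; the hypercall number and helper below are made up purely for illustration), the guest-visible side of such a protection request might look something like this, reusing the existing kvm_hypercall2() guest helper:

#include <linux/kernel.h>
#include <asm/io.h>		/* virt_to_phys() */
#include <asm/kvm_para.h>	/* kvm_hypercall2() */

/*
 * Hypothetical hypercall: ask the host to drop write permission in the
 * EPT/NPT entries covering a range of guest memory, so that even a
 * later compromise of the guest kernel cannot silently rewrite it.
 */
#define HC_WRITE_PROTECT_RANGE	0x1000	/* made-up number for this sketch */

static int hv_write_protect(void *addr, size_t len)
{
	unsigned long gpa = virt_to_phys(addr);	/* guest-physical start */

	return kvm_hypercall2(HC_WRITE_PROTECT_RANGE, gpa, len) ? -EPERM : 0;
}

/*
 * The guest would call this after boot on structures that are not
 * expected to change again, and the host would refuse (or audit) any
 * later attempt to undo the protection.
 */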
>
> Single-application VM sandboxes could also get their data differently,
> using (partial) host filesystem passthrough, instead of a virtual
> block device. This may change the relative utility of caching data
> inside the guest page cache, versus freeing up that memory and
> allowing the host to use it to cache things.
>
> Are people interested in discussing this at LSF/MM, or is it better
> saved for a different forum?
In my view, it's worth discussing the details here with a focus on memory and storage; other areas, such as CPU scheduling and networking, would be better covered in a different forum. For example, the cost of context switching grows as applications are spread across more (smaller) VMs, because that tends to incur more VM exits.
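To make that concrete with purely illustrative numbers: if a native context switch between two processes costs on the order of a microsecond, while switching between two single-application VMs additionally involves a VM exit/entry round trip plus host-side scheduling and so ends up costing several microseconds, then a workload that previously switched between co-located processes many thousands of times per second could see its switching overhead grow several-fold once each application lives in its own small VM.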
---
Jun
Intel Open Source Technology Center
Thread overview: 10+ messages
2016-01-22 15:56 Rik van Riel
2016-01-22 16:05 ` [Lsf-pc] " James Bottomley
2016-01-22 17:11 ` Johannes Weiner
2016-01-27 15:48 ` Vladimir Davydov
2016-01-27 18:36 ` Johannes Weiner
2016-01-28 17:12 ` Vladimir Davydov
2016-01-23 23:41 ` Nakajima, Jun [this message]
2016-01-24 17:06 ` One Thousand Gnomes
2016-01-25 17:25 ` Rik van Riel
2016-01-28 15:18 ` Aneesh Kumar K.V