linux-mm.kvack.org archive mirror
From: Alexandre Chartre <alexandre.chartre@oracle.com>
To: lsf-pc@lists.linux-foundation.org
Cc: jwadams@google.com, James.Bottomley@hansenpartnership.com,
	rppt@linux.ibm.com, linux-mm@kvack.org, pjt@google.com,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Alexandre Chartre <alexandre.chartre@oracle.com>
Subject: [LSF/MM ATTEND] Address Space Isolation for KVM
Date: Thu, 28 Mar 2019 18:18:57 +0100	[thread overview]
Message-ID: <806eb206-d0e4-3362-4e33-d9563269f016@oracle.com> (raw)


Hi,

I am from the Oracle Linux kernel and virtualization team, and I am investigating
address space isolation inside the kernel.

I am working on a set of patches to have parts of KVM run with a subset of the kernel
address space. The ultimate goal is to be able to run KVM code between a VMExit
and the next VMResume with a limited address space (containing only non-sensitive data),
in order to prevent any potential data-stealing attack.

I am in conversation with Jonathan Adams about this work, and we would like guidance
from the community about this idea, as well as feedback on the patches currently in progress.
I would be happy to attend and present the work done so far, discuss the problems and
challenges we are facing, and brainstorm ideas about address space isolation inside
the kernel.

Here is an overview of the changes being made:

  - add functions to copy page table entries at the different levels (PGD, P4D, PUD, PMD, PTE),
    and corresponding functions for clearing/freeing them.
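To make the copy/clear pairing concrete, here is a rough userspace sketch (not the actual kernel helpers, and only two toy levels instead of five; all names are illustrative): intermediate table pages are duplicated, while the leaf entries keep pointing at the same frames, so the copy can later be torn down without freeing anything it does not own.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define PTRS_PER_PGD 8
#define PTRS_PER_PTE 8

typedef unsigned long pte_t;            /* leaf entry: frame | flags */
typedef struct { pte_t *ptes; } pgd_t;  /* one toy "PGD" slot */

/* Copy one PTE page: the leaf values are shared, the page itself is new. */
static pte_t *copy_pte_page(const pte_t *src)
{
    pte_t *dst = calloc(PTRS_PER_PTE, sizeof(pte_t));
    if (dst)
        memcpy(dst, src, PTRS_PER_PTE * sizeof(pte_t));
    return dst;
}

/* Copy PGD slots [start, end) from src into dst, duplicating each
 * present lower-level page so dst can be freed independently of src. */
static int copy_pgd_range(pgd_t *dst, const pgd_t *src, int start, int end)
{
    for (int i = start; i < end; i++) {
        if (!src[i].ptes) {
            dst[i].ptes = NULL;
            continue;
        }
        dst[i].ptes = copy_pte_page(src[i].ptes);
        if (!dst[i].ptes)
            return -1;
    }
    return 0;
}

/* The matching clear/free: release the copied table pages only. */
static void clear_pgd_range(pgd_t *pgd, int start, int end)
{
    for (int i = start; i < end; i++) {
        free(pgd[i].ptes);
        pgd[i].ptes = NULL;
    }
}
```

The real helpers would of course operate on pgd_t/p4d_t/pud_t/pmd_t/pte_t with the kernel's accessors; the point here is only the copy-down/free-down symmetry.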

  - add a dedicated mm to KVM (kvm_mm) to reduce the address space. kvm_mm is built
    by copying mappings from init_mm. The challenge is to identify the minimal set of
    data to map so that the task can at least run switch_mm() to switch the page table
    (and switch back). Current mappings are: the entire kernel text, per-cpu memory,
    the cpu entry area, %esp fixup stacks, the task running KVM (with its stack, mm and pgd),
    the kvm module, the kvm_intel module, the kvm vmx (with its kvm struct, pml_pg, guest_msrs,
    vmcs01.vmcs), and vmx_l1d_flush_pages.
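Conceptually, the reduced address space is just the set of ranges explicitly copied from the full kernel mapping. A simplified userspace sketch of that idea (the kernel code manipulates real page tables; here a range list stands in for them, and all names are made up for illustration):

```c
#include <assert.h>
#include <stdbool.h>

struct range { unsigned long start, end; };

#define MAX_RANGES 16

struct reduced_mm {
    struct range ranges[MAX_RANGES];
    int nr;
};

/* "Copy a mapping from init_mm": record that [start, end) is present
 * in the reduced address space. */
static int rmm_map(struct reduced_mm *mm, unsigned long start,
                   unsigned long end)
{
    if (mm->nr >= MAX_RANGES || start >= end)
        return -1;
    mm->ranges[mm->nr].start = start;
    mm->ranges[mm->nr].end = end;
    mm->nr++;
    return 0;
}

/* Would this access fault in the reduced address space? */
static bool rmm_mapped(const struct reduced_mm *mm, unsigned long addr)
{
    for (int i = 0; i < mm->nr; i++)
        if (addr >= mm->ranges[i].start && addr < mm->ranges[i].end)
            return true;
    return false;
}
```

Finding the minimal set of rmm_map() calls (kernel text, per-cpu memory, the current task's stack/mm/pgd, and so on) is exactly the hard part described above: anything missing means the task cannot even reach switch_mm() to leave the reduced space.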

  - add a page fault handler to report accesses to unmapped data when running with the
    KVM reduced address space. The handler automatically switches back to the full kernel
    address space. This is based on an original idea from Paul Turner.
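A toy sketch of that fallback behavior, under the assumption (mine, for illustration) that the fault path can tell which address space it was entered from; none of these names exist in the patches:

```c
#include <assert.h>
#include <stdbool.h>

enum mm_kind { FULL_MM, KVM_MM };

static enum mm_kind current_mm = KVM_MM;
static unsigned long last_reported;   /* last unmapped address seen */

/* Stand-in for "is addr mapped in kvm_mm?". Toy rule: only the
 * first page is mapped. */
static bool kvm_mm_mapped(unsigned long addr)
{
    return addr < 0x1000UL;
}

/* Called on a fault. Returns true if the fault was caused by the
 * reduced address space and was handled by reporting the missing
 * mapping and switching back to the full kernel address space. */
static bool kvm_isolation_fault(unsigned long addr)
{
    if (current_mm != KVM_MM || kvm_mm_mapped(addr))
        return false;          /* a genuine fault: let the normal path run */
    last_reported = addr;      /* report the data that needs mapping */
    current_mm = FULL_MM;      /* transparently leave isolation */
    return true;
}
```

The reporting side is what makes the approach practical during development: each switch-back identifies a mapping that should be added to kvm_mm (or confirmed as genuinely sensitive).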

  - add switches to the kvm address space (before VMResume) and back to the full kernel
    address space whenever we may access sensitive data (for example when exiting the
    vcpu_run() loop, in interrupt handlers, or when the task is scheduled out). This is
    based on original work from Liran Alon.
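The placement of those switches can be sketched as follows (a hypothetical shape, not the actual patch: function names are invented, and the real switch points also hook interrupts and the scheduler, which this toy loop cannot show):

```c
#include <assert.h>
#include <stdbool.h>

static bool in_isolation;

/* Before VMResume: switch_mm() to kvm_mm would happen here. */
static void kvm_isolation_enter(void)
{
    if (!in_isolation)
        in_isolation = true;
}

/* On any path that may touch sensitive data (leaving the run loop,
 * an interrupt, a reschedule): switch back to the full mm. */
static void kvm_isolation_exit(void)
{
    if (in_isolation)
        in_isolation = false;
}

/* Toy vcpu_run() loop showing where the switches sit. */
static int vcpu_run(int nr_exits)
{
    int handled = 0;
    for (int i = 0; i < nr_exits; i++) {
        kvm_isolation_enter();   /* reduced mapping across VMResume..VMExit */
        /* guest runs here; fast VMExit handling stays isolated */
        handled++;
    }
    kvm_isolation_exit();        /* full mapping before returning to callers */
    return handled;
}
```

The design intent is that the common VMExit-handle-VMResume cycle never leaves the reduced address space; only the slow paths pay for the switch back.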


Thanks for your consideration. I will be happy to provide more information and to join any
discussion about this topic.


Rgds,

alex.

