linux-mm.kvack.org archive mirror
From: Pasha Tatashin <pasha.tatashin@soleen.com>
To: "Gupta, Pankaj" <pankaj.gupta@amd.com>
Cc: lsf-pc@lists.linux-foundation.org, linux-mm <linux-mm@kvack.org>
Subject: Re: [LSF/MM/BPF TOPIC] Virtual Machine Memory Passthrough
Date: Wed, 22 Feb 2023 15:56:30 -0500	[thread overview]
Message-ID: <CA+CK2bAUUD6KiQdZiqmWKCw+Kgo8roK695qpAUd6mMmUAHnU0Q@mail.gmail.com> (raw)
In-Reply-To: <6070e228-17f1-8495-470d-80dd38963266@amd.com>

On Wed, Feb 22, 2023 at 3:27 PM Gupta, Pankaj <pankaj.gupta@amd.com> wrote:
>
> On 2/22/2023 7:18 PM, Pasha Tatashin wrote:
>
> Hi Pasha,
>
> >> Coming from the virtio-pmem and free page hinting background, I am
> >> interested in this discussion. I saw your proposal about a single-owner
> >> memory driver in the other thread and could not entirely connect the
> >> dots between the idea and "reducing the memory footprint overhead
> >> for virtual machines". Do we plan to coordinate guest memory state with
> >> the corresponding host state for efficient memory reclaim decisions?
> >> Or are we targeting something entirely different here?
> >
> > Hi Pankaj,
> >
> > The plan is to have a driver, /dev/memctl, and a corresponding VMM agent
> > that synchronously passes information about how the guest would like its
> > memory to be backed on the host.
> >
> > For example, the following hints can come from the guest for a range
> > of physical addresses:
> > MADV_NOHUGEPAGE
> > MADV_HUGEPAGE
> > MADV_DONTNEED
> > PR_SET_VMA_ANON_NAME
> > etc.
> >
> > Altogether, this should allow memory management operations to be
> > performed only on the host side, reducing the number of operations
> > performed in the guest.
>
> o.k. That sounds like the guest will have a *special* interface (paravirt?)
> for some of the memory management operations, in coordination with the host.

That is correct, hence memory passthrough.

>
> Would the guest still allow other regular memory operations, which would
> get fulfilled by the guest itself? Just wondering if this solution would
> only be useful for specific workloads that are aware of the known MADV calls?

Depending on the flexibility of the interface: we are currently working
on supporting tcmalloc() and mmap(MAP_ANONYMOUS), but in the future this
can be extended to more types of memory.

> And it might not do/require contiguous allocation/deallocation of memory?

Contiguous memory allocation on the host is not required.

Thanks,
Pasha


Thread overview: 12+ messages
2023-02-20 16:31 Pasha Tatashin
2023-02-20 23:51 ` Gavin Shan
2023-02-22 13:43   ` Pasha Tatashin
2023-02-22 15:31     ` Zi Yan
2023-02-22 15:43       ` Pasha Tatashin
2023-02-21  4:38 ` Zhu Yanjun
2023-02-22 13:44   ` Pasha Tatashin
2023-02-22 17:08 ` Gupta, Pankaj
2023-02-22 18:18   ` Pasha Tatashin
2023-02-22 20:27     ` Gupta, Pankaj
2023-02-22 20:56       ` Pasha Tatashin [this message]
2023-02-23  9:11         ` Gupta, Pankaj
