From: "Gupta, Pankaj" <pankaj.gupta@amd.com>
To: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: lsf-pc@lists.linux-foundation.org, linux-mm <linux-mm@kvack.org>,
David Hildenbrand <david@redhat.com>
Subject: Re: [LSF/MM/BPF TOPIC] Virtual Machine Memory Passthrough
Date: Thu, 23 Feb 2023 10:11:46 +0100 [thread overview]
Message-ID: <5c6ae46f-63e8-7582-01d2-edac1dd45bc9@amd.com> (raw)
In-Reply-To: <CA+CK2bAUUD6KiQdZiqmWKCw+Kgo8roK695qpAUd6mMmUAHnU0Q@mail.gmail.com>
>> Hi Pasha,
>>
>>>> Coming from the virtio-pmem and some free page hinting background, I am
>>>> interested in this discussion. I saw your proposal about a single-owner
>>>> memory driver in the other thread and could not entirely connect the dots
>>>> about the applicability of the idea to "reducing the memory footprint
>>>> overhead for virtual machines". Do we plan to coordinate guest memory
>>>> state with the corresponding host state for efficient memory reclaim
>>>> decisions? Or are we targeting something entirely different here?
>>>
>>> Hi Pankaj,
>>>
>>> The plan is to have a driver, /dev/memctl, and a corresponding VMM agent
>>> that synchronously passes information about how the guest would like its
>>> memory to be backed on the host.
>>>
>>> For example, the following information can come from the guest for a range
>>> of physical addresses:
>>> MADV_NOHUGEPAGE
>>> MADV_HUGEPAGE
>>> MADV_DONTNEED
>>> PR_SET_VMA_ANON_NAME
>>> etc.
>>>
>>> Altogether, this should help by doing memory management operations
>>> only on the host side, reducing the number of operations that are
>>> performed in the guest.
>>
>> o.k. That sounds like the guest will have a *special* interface (paravirt?)
>> for some of the memory management operations, in coordination with the host.
>
> That is correct, hence memory passthrough.
ya.
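To make sure I understand the flow, here is a rough sketch of how I imagine
the host/VMM-agent side could look. The struct layout and names below are
purely hypothetical, just my reading of the proposal, not the actual
interface:

/*
 * Hypothetical sketch: the guest driver (/dev/memctl) forwards a
 * (gpa, size, advice) tuple, and the VMM agent applies the matching
 * madvise() on the host mapping that backs guest RAM.
 */
#include <stdint.h>
#include <sys/mman.h>

struct memctl_req {
	uint64_t gpa;		/* start of guest physical range */
	uint64_t size;		/* length in bytes */
	uint32_t advice;	/* MADV_DONTNEED, MADV_HUGEPAGE, ... */
};

/* assumption: guest RAM is one contiguous anonymous mapping in the VMM */
extern uint8_t *guest_ram_hva;	/* host virtual address backing GPA 0 */

static int handle_memctl_req(const struct memctl_req *req)
{
	void *hva = guest_ram_hva + req->gpa;

	/*
	 * The guest only sends the hint; the actual memory management
	 * operation is performed here, on the host side.
	 */
	return madvise(hva, req->size, req->advice);
}

i.e. the guest never touches its own page tables for this, the madvise()
happens only against the host backing. Please correct me if I misread it.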
>
>>
>> Would the guest still allow other regular memory operations, which would get
>> fulfilled by the guest itself? Just wondering whether this solution would only
>> be useful for specific workloads that are aware of the known MADV calls?
>
> Depending on the flexibility of the interface: we are currently
> working on supporting tcmalloc(), and also mmap(MAP_ANONYMOUS), but in
> the future this can be extended to more types of memory.
Not sure whether it is worth extending the existing paravirt memory management
interfaces like virtio-mem or virtio-balloon, or creating a new driver
altogether? Adding David (in Cc) for his thoughts.
Thanks,
Pankaj
>
>> And might not require contiguous allocation/deallocation of memory?
>
> Contiguous memory allocation on the host is not required.
>
> Thanks,
> Pasha