linux-mm.kvack.org archive mirror
From: Nikita Kalyazin <kalyazin@amazon.com>
To: David Hildenbrand <david@redhat.com>, <willy@infradead.org>,
	<pbonzini@redhat.com>, <linux-fsdevel@vger.kernel.org>,
	<linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
	<kvm@vger.kernel.org>
Cc: <michael.day@amd.com>, <jthoughton@google.com>,
	<michael.roth@amd.com>, <ackerleytng@google.com>,
	<graf@amazon.de>, <jgowans@amazon.com>, <roypat@amazon.co.uk>,
	<derekmn@amazon.com>, <nsaenz@amazon.es>, <xmarcalx@amazon.com>
Subject: Re: [RFC PATCH 0/2] mm: filemap: add filemap_grab_folios
Date: Tue, 14 Jan 2025 16:07:45 +0000
Message-ID: <9a188baa-034b-4dd5-b90e-7182f1fbaec6@amazon.com>
In-Reply-To: <5c62bdbb-7a4e-4178-8c03-e84491d8d150@redhat.com>

On 13/01/2025 12:20, David Hildenbrand wrote:
> On 10.01.25 19:54, Nikita Kalyazin wrote:
>> On 10/01/2025 17:01, David Hildenbrand wrote:
>>> On 10.01.25 16:46, Nikita Kalyazin wrote:
>>>> Based on David's suggestion for speeding up guest_memfd memory
>>>> population [1] made at the guest_memfd upstream call on 5 Dec 2024 [2],
>>>> this adds `filemap_grab_folios` that grabs multiple folios at a time.
>>>>
>>>
>>> Hi,
>>
>> Hi :)
>>
>>>
>>>> Motivation
>>>>
>>>> When profiling guest_memfd population and comparing the results with
>>>> population of anonymous memory via UFFDIO_COPY, I observed that the
>>>> former was up to 20% slower, mainly due to adding newly allocated pages
>>>> to the pagecache.  As far as I can see, the two main contributors to it
>>>> are pagecache locking and tree traversals needed for every folio.  The
>>>> RFC attempts to partially mitigate those by adding multiple folios at a
>>>> time to the pagecache.
>>>>
>>>> Testing
>>>>
>>>> With the change applied, I was able to observe a 10.3% (708 to 635 ms)
>>>> speedup in a selftest that populated 3GiB guest_memfd and a 9.5% 
>>>> (990 to
>>>> 904 ms) speedup when restoring a 3GiB guest_memfd VM snapshot using a
>>>> custom Firecracker version, both on Intel Ice Lake.
>>>
>>> Does that mean that it's still 10% slower (based on the 20% above), or
>>> were the 20% from a different micro-benchmark?
>>
>> Yes, it is still slower:
>>    - isolated/selftest: 2.3%
>>    - Firecracker setup: 8.9%
>>
>> Not sure why the values are so different though.  I'll try to find an
>> explanation.
> 
> The 2.3% looks very promising.

It does.  I sorted out my Firecracker setup and saw a similar figure 
there, which made me more confident.
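
For context, the per-folio path those numbers are measured against looks
roughly like the sketch below.  It is illustrative only (not the RFC or the
actual guest_memfd code): the helpers are the existing pagemap API, and the
error handling and the data copy itself are elided.

    #include <linux/pagemap.h>
    #include <linux/err.h>

    /*
     * Illustrative population loop: one page cache lookup/insert per index.
     * Each filemap_grab_folio() call takes the page cache lock and walks
     * the xarray for a single folio; that is the per-folio overhead a
     * batched filemap_grab_folios() is meant to amortise.
     */
    static int populate_one_by_one(struct address_space *mapping,
                                   pgoff_t start, pgoff_t end)
    {
            pgoff_t index;

            for (index = start; index < end; index++) {
                    struct folio *folio = filemap_grab_folio(mapping, index);

                    if (IS_ERR(folio))
                            return PTR_ERR(folio);

                    /* ... copy this index's payload into the folio ... */

                    folio_mark_uptodate(folio);
                    folio_unlock(folio);
                    folio_put(folio);
            }
            return 0;
    }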

>>
>>>>
>>>> Limitations
>>>>
>>>> While `filemap_grab_folios` handles THP/large folios internally and
>>>> deals with reclaim artifacts in the pagecache (shadows), for simplicity
>>>> reasons, the RFC does not support those as it demonstrates the
>>>> optimisation applied to guest_memfd, which only uses small folios and
>>>> does not support reclaim at the moment.
>>>
>>> It might be worth pointing out that, while support for larger folios is
>>> in the works, there will be scenarios where small folios are unavoidable
>>> in the future (mixture of shared and private memory).
>>>
>>> How hard would it be to just naturally support large folios as well?
>>
>> I don't think it's impossible.  It's just one more dimension that needs
>> to be handled.  The `__filemap_add_folio` logic is already rather
>> complex, and correctly processing multiple folios while also splitting
>> them when necessary looks substantially convoluted to me.  So my idea
>> was to discuss/validate the multi-folio approach first before rolling
>> up my sleeves.
> 
> We should likely try making this as generic as possible, meaning we'll
> support roughly what filemap_grab_folio() would have supported (e.g., 
> also large folios).
> 
> Now I find filemap_get_folios_contig() [which is already used in the
> memfd code], and wonder if that could be reused/extended fairly easily.

Fair, I will look into how it could be made generic.
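
For reference, filemap_get_folios_contig() today only looks up folios that
are already present in the page cache, roughly as in the sketch below
(illustrative, not the actual memfd code); a generic filemap_grab_folios()
would additionally need to allocate and insert the missing folios.

    #include <linux/pagemap.h>
    #include <linux/pagevec.h>

    /* Illustrative walk over already-present folios in [start, end]. */
    static void walk_present_folios(struct address_space *mapping,
                                    pgoff_t start, pgoff_t end)
    {
            struct folio_batch fbatch;
            unsigned int i, nr;

            folio_batch_init(&fbatch);
            /*
             * Fills the batch with contiguous folios beginning at 'start'
             * and advances 'start' past the last folio returned.
             */
            nr = filemap_get_folios_contig(mapping, &start, end, &fbatch);
            for (i = 0; i < nr; i++) {
                    /* ... operate on fbatch.folios[i]; each holds a reference ... */
            }
            folio_batch_release(&fbatch);
    }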

> -- 
> Cheers,
> 
> David / dhildenb
> 




Thread overview: 9+ messages
2025-01-10 15:46 Nikita Kalyazin
2025-01-10 15:46 ` [RFC PATCH 1/2] " Nikita Kalyazin
2025-01-10 15:46 ` [RFC PATCH 2/2] KVM: guest_memfd: use filemap_grab_folios in write Nikita Kalyazin
2025-01-10 21:08   ` Mike Day
2025-01-14 16:08     ` Nikita Kalyazin
2025-01-10 17:01 ` [RFC PATCH 0/2] mm: filemap: add filemap_grab_folios David Hildenbrand
2025-01-10 18:54   ` Nikita Kalyazin
2025-01-13 12:20     ` David Hildenbrand
2025-01-14 16:07       ` Nikita Kalyazin [this message]
