From: Nikita Kalyazin <kalyazin@amazon.com>
To: <pbonzini@redhat.com>, <corbet@lwn.net>, <kvm@vger.kernel.org>,
	<linux-doc@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Cc: <jthoughton@google.com>, <brijesh.singh@amd.com>,
	<michael.roth@amd.com>, <graf@amazon.de>, <jgowans@amazon.com>,
	<roypat@amazon.co.uk>, <derekmn@amazon.com>, <nsaenz@amazon.es>,
	<xmarcalx@amazon.com>, "David Hildenbrand" <david@redhat.com>,
	Sean Christopherson <seanjc@google.com>, <linux-mm@kvack.org>
Subject: Re: [RFC PATCH 0/4] KVM: ioctl for populating guest_memfd
Date: Wed, 20 Nov 2024 12:09:05 +0000
Message-ID: <08aeaf6e-dc89-413a-86a6-b9772c9b2faf@amazon.com>
In-Reply-To: <20241024095429.54052-1-kalyazin@amazon.com>

On 24/10/2024 10:54, Nikita Kalyazin wrote:
> [2] proposes an alternative to
> UserfaultFD for intercepting stage-2 faults, while this series
> conceptually complements it with the ability to populate guest memory
> backed by guest_memfd for `KVM_X86_SW_PROTECTED_VM` VMs.

+David
+Sean
+mm

While measuring guest_memfd memory population performance with this
series, I noticed that it takes longer than my baseline, which fills
anonymous private memory via UFFDIO_COPY.

I am measuring on x86_64 with a 3 GiB memory region:
  - anon/private UFFDIO_COPY:  940 ms
  - guest_memfd:              1371 ms (+46%)
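
For context, the baseline fills an anonymous private mapping page by
page with UFFDIO_COPY, roughly as in the sketch below (a
simplification, not my exact tool: error handling and the
fault-handling thread are omitted, and names are illustrative):

  /* uffd_populate.c: baseline sketch, error handling trimmed */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <linux/userfaultfd.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void)
  {
          size_t len = 3ull << 30, pg = sysconf(_SC_PAGESIZE);
          long uffd = syscall(SYS_userfaultfd, O_CLOEXEC);
          struct uffdio_api api = { .api = UFFD_API };

          ioctl(uffd, UFFDIO_API, &api);

          char *dst = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          struct uffdio_register reg = {
                  .range = { (unsigned long)dst, len },
                  .mode  = UFFDIO_REGISTER_MODE_MISSING,
          };

          ioctl(uffd, UFFDIO_REGISTER, &reg);

          /* source page whose contents get copied into every dst page */
          char *src = aligned_alloc(pg, pg);
          memset(src, 0xab, pg);

          for (size_t off = 0; off < len; off += pg) {
                  struct uffdio_copy copy = {
                          .dst = (unsigned long)dst + off,
                          .src = (unsigned long)src,
                          .len = pg,
                  };

                  /* allocates and installs the page, as if resolving
                   * a missing fault */
                  ioctl(uffd, UFFDIO_COPY, &copy);
          }
          return 0;
  }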

It turns out that the effect is observable not only for guest_memfd, but 
also for any type of shared memory, e.g. memfd or anonymous memory mapped 
as shared.

Below are measurements of a plain mmap(MAP_POPULATE) operation:

  mmap(NULL, 3ll * (1 << 30), PROT_READ | PROT_WRITE,
       MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
    vs
  mmap(NULL, 3ll * (1 << 30), PROT_READ | PROT_WRITE,
       MAP_SHARED | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);

Results:
  - MAP_PRIVATE: 968 ms
  - MAP_SHARED: 1646 ms
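
A minimal harness to reproduce the comparison could look like this (a
sketch, not my exact tool; the mapping visibility is picked via argv):

  /* map_populate.c: time MAP_POPULATE, private vs shared */
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <time.h>

  int main(int argc, char **argv)
  {
          int vis = (argc > 1 && !strcmp(argv[1], "shared"))
                          ? MAP_SHARED : MAP_PRIVATE;
          struct timespec t0, t1;

          clock_gettime(CLOCK_MONOTONIC, &t0);
          void *p = mmap(NULL, 3ll * (1 << 30), PROT_READ | PROT_WRITE,
                         vis | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
          clock_gettime(CLOCK_MONOTONIC, &t1);

          if (p == MAP_FAILED) {
                  perror("mmap");
                  return 1;
          }

          printf("%s: %ld ms\n",
                 vis == MAP_SHARED ? "shared" : "private",
                 (t1.tv_sec - t0.tv_sec) * 1000 +
                 (t1.tv_nsec - t0.tv_nsec) / 1000000);
          return 0;
  }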

I am seeing this effect on a range of kernels. The oldest I tried was 
5.10; the newest is the current kvm-next (for-linus-2590-gd96c77bd4eeb).

When profiling with perf (on kvm-next), I observe the following hottest 
operations. The full call trees are attached at the end of this email.
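
Call trees like the ones in the appendix can be collected with
something along these lines (my exact perf options may have differed;
./map_populate is the sketch harness from above):

  perf record -g -- ./map_populate shared
  perf report --stdio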

MAP_PRIVATE:
- 19.72% clear_page_erms, rep stos %al,%es:(%rdi)

MAP_SHARED:
- 43.94% shmem_get_folio_gfp, lock orb $0x8,(%rdi), which is the atomic 
setting of the PG_uptodate bit
- 10.98% clear_page_erms, rep stos %al,%es:(%rdi)

Note that MAP_PRIVATE/do_anonymous_page calls __folio_mark_uptodate, 
which sets the PG_uptodate bit with a regular (non-atomic) store, while 
MAP_SHARED/shmem_get_folio_gfp calls folio_mark_uptodate, which sets 
the PG_uptodate bit atomically.
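
For reference, the two helpers differ only in whether the bit is set
with a plain store or a locked RMW; paraphrased from
include/linux/page-flags.h (original comments trimmed):

  /* Non-atomic: only safe while no one else can observe the folio. */
  static __always_inline void __folio_mark_uptodate(struct folio *folio)
  {
          smp_wmb();      /* order the data writes before the flag */
          __set_bit(PG_uptodate, folio_flags(folio, 0));  /* plain orb */
  }

  /* Atomic: the folio may already be visible, e.g. in the page cache. */
  static __always_inline void folio_mark_uptodate(struct folio *folio)
  {
          smp_wmb();
          set_bit(PG_uptodate, folio_flags(folio, 0));    /* lock orb */
  }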

While this logic is intuitive, its performance effect is more 
significant than I would expect.

The questions are:
  - Is this a well-known behaviour?
  - Is there a way to mitigate it, i.e. make population of shared memory 
(including guest_memfd) faster and comparable to private memory?

Nikita


Appendix: full call tree obtained via perf

MAP_PRIVATE:

       - 87.97% __mmap
            entry_SYSCALL_64_after_hwframe
            do_syscall_64
            vm_mmap_pgoff
            __mm_populate
            populate_vma_page_range
          - __get_user_pages
             - 77.94% handle_mm_fault
                - 76.90% __handle_mm_fault
                   - 72.70% do_anonymous_page
                      - 31.92% vma_alloc_folio_noprof
                         - 30.74% alloc_pages_mpol_noprof
                            - 29.60% __alloc_pages_noprof
                               - 28.40% get_page_from_freelist
                                    19.72% clear_page_erms
                                  - 3.00% __rmqueue_pcplist
                                       __mod_zone_page_state
                                    1.18% _raw_spin_trylock
                      - 20.03% __pte_offset_map_lock
                         - 15.96% _raw_spin_lock
                              1.50% preempt_count_add
                         - 2.27% __pte_offset_map
                              __rcu_read_lock
                      - 7.22% __folio_batch_add_and_move
                         - 4.68% folio_batch_move_lru
                            - 3.77% lru_add
                               + 0.95% __mod_zone_page_state
                                 0.86% __mod_node_page_state
                           0.84% folios_put_refs
                           0.55% check_preemption_disabled
                      - 2.85% folio_add_new_anon_rmap
                         - __folio_mod_stat
                              __mod_node_page_state
                   - 1.15% pte_offset_map_nolock
                        __pte_offset_map
             - 7.59% follow_page_pte
                - 4.56% __pte_offset_map_lock
                   - 2.27% _raw_spin_lock
                        preempt_count_add
                     1.13% __pte_offset_map
                  0.75% folio_mark_accessed

MAP_SHARED:

       - 77.89% __mmap
            entry_SYSCALL_64_after_hwframe
            do_syscall_64
            vm_mmap_pgoff
            __mm_populate
            populate_vma_page_range
          - __get_user_pages
             - 72.11% handle_mm_fault
                - 71.67% __handle_mm_fault
                   - 69.62% do_fault
                      - 44.61% __do_fault
                         - shmem_fault
                            - 43.94% shmem_get_folio_gfp
                                - 17.20% shmem_alloc_and_add_folio.constprop.0
                                  - 5.10% shmem_alloc_folio
                                     - 4.58% folio_alloc_mpol_noprof
                                        - alloc_pages_mpol_noprof
                                           - 4.00% __alloc_pages_noprof
                                              - 3.31% get_page_from_freelist
                                                   1.24% __rmqueue_pcplist
                                  - 5.07% shmem_add_to_page_cache
                                     - 1.44% __mod_node_page_state
                                          0.61% check_preemption_disabled
                                       0.78% xas_store
                                       0.74% xas_find_conflict
                                       0.66% _raw_spin_lock_irq
                                  - 3.96% __folio_batch_add_and_move
                                     - 2.41% folio_batch_move_lru
                                          1.88% lru_add
                                  - 1.56% shmem_inode_acct_blocks
                                     - 1.24% __dquot_alloc_space
                                        - 0.77% inode_add_bytes
                                             _raw_spin_lock
                                  - 0.77% shmem_recalc_inode
                                       _raw_spin_lock
                                 10.98% clear_page_erms
                               - 1.17% filemap_get_entry
                                    0.78% xas_load
                      - 20.26% filemap_map_pages
                         - 12.23% next_uptodate_folio
                            - 1.27% xas_find
                                 xas_load
                         - 1.16% __pte_offset_map_lock
                              0.59% _raw_spin_lock
                      - 3.48% finish_fault
                         - 1.28% set_pte_range
                              0.96% folio_add_file_rmap_ptes
                         - 0.91% __pte_offset_map_lock
                              0.54% _raw_spin_lock
                     0.57% pte_offset_map_nolock
             - 4.11% follow_page_pte
                - 2.36% __pte_offset_map_lock
                   - 1.32% _raw_spin_lock
                        preempt_count_add
                     0.54% __pte_offset_map

