From: Shivank Garg <shivankg@amd.com>
To: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
Thomas Gleixner <tglx@linutronix.de>
Cc: ardb@kernel.org, bp@alien8.de, brijesh.singh@amd.com,
corbet@lwn.net, dave.hansen@linux.intel.com, hpa@zytor.com,
jan.kiszka@siemens.com, jgross@suse.com, kbingham@kernel.org,
linux-doc@vger.kernel.org, linux-efi@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
luto@kernel.org, michael.roth@amd.com, mingo@redhat.com,
peterz@infradead.org, rick.p.edgecombe@intel.com,
sandipan.das@amd.com, thomas.lendacky@amd.com, x86@kernel.org
Subject: Re: [PATCH 0/3] x86: Make 5-level paging support unconditional for x86-64
Date: Wed, 31 Jul 2024 23:15:24 +0530
Message-ID: <5b031938-9c82-4f09-b5dc-c45bc7fe6e07@amd.com>
In-Reply-To: <jczq52e6vrluqobqzejakdo3mdxqiqohdzbwmq64uikrm2h52n@l2bgf4ir7pj6>

On 7/31/2024 5:06 PM, Kirill A. Shutemov wrote:
> On Wed, Jul 31, 2024 at 11:15:05AM +0200, Thomas Gleixner wrote:
>> On Wed, Jul 31 2024 at 14:27, Shivank Garg wrote:
>>> lmbench:lat_pagefault: Metric - page-fault time (us) - lower is better
>>>
>>>               4-Level PT               5-Level PT               % Change
>>> THP-never     Mean: 0.4068             Mean: 0.4294             5.56
>>>               95% CI: 0.4057-0.4078    95% CI: 0.4287-0.4302
>>> THP-always    Mean: 0.4061             Mean: 0.4288             5.59
>>>               95% CI: 0.4051-0.4071    95% CI: 0.4281-0.4295
>>>
>>> Inference:
>>> 5-level page tables show an increase in page-fault latency
>>> ((0.4294 - 0.4068) / 0.4068 = 5.56% for THP-never), but this does not
>>> significantly impact the other benchmarks.
>>
>> 5% regression on lmbench is a NONO.
>
> Yeah, that's a biggy.
>
> In our testing (on Intel HW), both on bare metal and in VMs, we didn't
> see any significant difference between 4- and 5-level paging, but we
> were focused on TLB fill latency. Maybe something is wrong in the fault
> path?
>
> It requires a closer look.
>
> Shivank, could you share how you run lat_pagefault? What file size? How
> many instances do you run in parallel?
Hi Kirill,

I took lmbench from here:
https://github.com/foss-for-synopsys-dwc-arc-processors/lmbench/blob/master/src/lat_pagefault.c

and I am running it with this command:

numactl --membind=1 --cpunodebind=1 bin/x86_64-linux-gnu/lat_pagefault -N 100 1GB_dev_urandom_file
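
The 1GB input file is just random data. I don't have the exact creation
command to hand, but it was generated along these lines (an assumed
reconstruction, not copied from my setup):

  # create a 1GiB file filled from /dev/urandom (assumed invocation)
  dd if=/dev/urandom of=1GB_dev_urandom_file bs=1M count=1024

-N 100 asks lmbench for 100 repetitions; lat_pagefault mmaps the file and
measures the time to fault its pages in. As invoked above it is a single
instance, so nothing runs in parallel.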
>
> It would also be nice to get perf traces. Maybe it is purely a SW issue.
>
4-level-page-table:
   - 52.31% benchmark
      - 49.52% asm_exc_page_fault
         - 49.35% exc_page_fault
            - 48.36% do_user_addr_fault
               - 46.15% handle_mm_fault
                  - 44.59% __handle_mm_fault
                     - 42.95% do_fault
                        - 40.89% filemap_map_pages
                           - 28.30% set_pte_range
                              - 23.70% folio_add_file_rmap_ptes
                                 - 14.30% __lruvec_stat_mod_folio
                                    - 10.12% __mod_lruvec_state
                                       - 5.70% __mod_memcg_lruvec_state
                                            0.60% cgroup_rstat_updated
                                         1.06% __mod_node_page_state
                                      2.84% __rcu_read_unlock
                                   0.76% srso_alias_safe_ret
                                0.84% set_ptes.isra.0
                           - 5.48% next_uptodate_folio
                              - 1.19% xas_find
                                   0.96% xas_load
                             1.00% set_ptes.isra.0
                 1.22% lock_vma_under_rcu
5-level-page-table:
   - 52.75% benchmark
      - 50.04% asm_exc_page_fault
         - 49.90% exc_page_fault
            - 48.91% do_user_addr_fault
               - 46.74% handle_mm_fault
                  - 45.27% __handle_mm_fault
                     - 43.30% do_fault
                        - 41.58% filemap_map_pages
                           - 28.04% set_pte_range
                              - 22.77% folio_add_file_rmap_ptes
                                 - 17.74% __lruvec_stat_mod_folio
                                    - 10.89% __mod_lruvec_state
                                       - 5.97% __mod_memcg_lruvec_state
                                            1.94% cgroup_rstat_updated
                                         1.09% __mod_node_page_state
                                      0.56% __mod_node_page_state
                                      2.28% __rcu_read_unlock
                                0.56% set_ptes.isra.0
                                1.08% set_ptes.isra.0
                           - 5.94% next_uptodate_folio
                              - 1.13% xas_find
                                   0.99% xas_load
                             1.13% srso_alias_safe_ret
                             0.52% set_ptes.isra.0
                 1.16% lock_vma_under_rcu
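
For reference, the traces above come from perf call-graph profiling. The
exact options are not reproduced verbatim here, but the general form was
something like:

  # record the benchmark with call graphs, then browse the tree
  perf record -g -- numactl --membind=1 --cpunodebind=1 \
      bin/x86_64-linux-gnu/lat_pagefault -N 100 1GB_dev_urandom_file
  perf report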
>> 5-level page tables add a cost to every hardware page table walk. That's
>> a matter of fact, and there is absolutely no reason to inflict this cost
>> on everyone.
>>
>> The solution is to make the 5-level mechanics smarter: evaluate at boot
>> time whether the machine has enough memory to require 5-level tables,
>> and select the paging depth accordingly.
>
> Let's understand the reason first.
Sure, please let me know how I can help debug this.

Thanks,
Shivank
>
> The risk with your proposal is that 5-level paging will not get any
> testing and will rot over time.
>
> I would like to keep it on, if possible.
>