From: Ingo Molnar <mingo@kernel.org>
To: Matteo Rizzo <matteorizzo@google.com>
Cc: "Lameter, Christopher" <cl@os.amperecomputing.com>,
Dave Hansen <dave.hansen@intel.com>,
penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com,
akpm@linux-foundation.org, vbabka@suse.cz,
roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
keescook@chromium.org, linux-kernel@vger.kernel.org,
linux-doc@vger.kernel.org, linux-mm@kvack.org,
linux-hardening@vger.kernel.org, tglx@linutronix.de,
mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
x86@kernel.org, hpa@zytor.com, corbet@lwn.net, luto@kernel.org,
peterz@infradead.org, jannh@google.com, evn@google.com,
poprdi@google.com, jordyzomer@google.com,
Linus Torvalds <torvalds@linux-foundation.org>
Subject: Re: [RFC PATCH 00/14] Prevent cross-cache attacks in the SLUB allocator
Date: Wed, 20 Sep 2023 09:44:33 +0200
Message-ID: <ZQqi4RqpEM7PRGkF@gmail.com>
In-Reply-To: <CAHKB1wKneke-dyvMY0JtW-xwW8m=GaUdafoAqdCE0B9csY7_bw@mail.gmail.com>
* Matteo Rizzo <matteorizzo@google.com> wrote:
> On Mon, 18 Sept 2023 at 19:39, Ingo Molnar <mingo@kernel.org> wrote:
> >
> > What's the split of the increase in overhead due to SLAB_VIRTUAL=y, between
> > user-space execution and kernel-space execution?
> >
>
> Same benchmark as before (compiling a kernel on a system running the patched
> kernel):
>
> Intel Skylake:
>
> LABEL | COUNT | MIN | MAX | MEAN | MEDIAN | STDDEV
> ---------------+-------+----------+----------+----------+----------+--------
> wall clock | | | | | |
> SLAB_VIRTUAL=n | 150 | 49.700 | 51.320 | 50.449 | 50.430 | 0.29959
> SLAB_VIRTUAL=y | 150 | 50.020 | 51.660 | 50.880 | 50.880 | 0.30495
> | | +0.64% | +0.66% | +0.85% | +0.89% | +1.79%
> system time | | | | | |
> SLAB_VIRTUAL=n | 150 | 358.560 | 362.900 | 360.922 | 360.985 | 0.91761
> SLAB_VIRTUAL=y | 150 | 362.970 | 367.970 | 366.062 | 366.115 | 1.015
> | | +1.23% | +1.40% | +1.42% | +1.42% | +10.60%
> user time | | | | | |
> SLAB_VIRTUAL=n | 150 | 3110.000 | 3124.520 | 3118.143 | 3118.120 | 2.466
> SLAB_VIRTUAL=y | 150 | 3115.070 | 3127.070 | 3120.762 | 3120.925 | 2.654
> | | +0.16% | +0.08% | +0.08% | +0.09% | +7.63%
These Skylake figures are a bit counter-intuitive: how does a +0.08%
increase in user time (which dominates 89.5% of execution), combined
with a +1.42% increase in system time (which consumes only 10.5% of CPU
capacity), result in a +0.85% increase in wall-clock time?
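Back-of-the-envelope: weighting each component's relative increase by
its share of total CPU time predicts only about +0.22%. A quick sketch
using the mean values from the table above (this assumes parallelism is
unchanged, i.e. that wall-clock time scales with aggregate CPU time):

  # Weight each component's relative increase by its share of total
  # CPU time; mean values taken from the Skylake table above.
  user_base, user_new = 3118.143, 3120.762   # user time (s)
  sys_base,  sys_new  = 360.922,  366.062    # system time (s)

  total = user_base + sys_base
  expected = (user_base / total) * (user_new / user_base - 1) \
           + (sys_base  / total) * (sys_new  / sys_base  - 1)
  print(f"expected: {expected:+.2%}")   # -> +0.22%, vs. +0.85% observed

Under that model roughly three quarters of the observed wall-clock
regression is unexplained by the CPU-time split alone.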
Are there hidden factors at work in the DMA space, as Linus suggested?

Or perhaps wall-clock time is dominated by the single-threaded final
link of the kernel, a phase that might be disproportionately hurt by
these changes?
(Stddev seems low enough for this not to be a measurement artifact.)
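The link-time theory should be easy to check directly, by timing just
the vmlinux link step under both kernels. A hypothetical sketch in
Python (it assumes a fully built tree, so that 'make vmlinux' after
removing vmlinux re-runs only the link):

  import statistics
  import subprocess
  import time

  # Time only the (largely single-threaded) final vmlinux link.
  def time_link(runs=30):
      samples = []
      for _ in range(runs):
          subprocess.run(["rm", "-f", "vmlinux"], check=True)
          t0 = time.monotonic()
          subprocess.run(["make", "-s", "vmlinux"], check=True)
          samples.append(time.monotonic() - t0)
      return statistics.mean(samples), statistics.stdev(samples)

  mean, stdev = time_link()
  print(f"link time: mean={mean:.3f}s stdev={stdev:.3f}s")

If the SLAB_VIRTUAL=y delta on this step alone is large, that would
support the single-threaded-link theory.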
The AMD Milan figures are more intuitive:
> AMD Milan:
>
> LABEL | COUNT | MIN | MAX | MEAN | MEDIAN | STDDEV
> ---------------+-------+----------+----------+----------+----------+--------
> wall clock | | | | | |
> SLAB_VIRTUAL=n | 150 | 25.480 | 26.550 | 26.065 | 26.055 | 0.23495
> SLAB_VIRTUAL=y | 150 | 25.820 | 27.080 | 26.531 | 26.540 | 0.25974
> | | +1.33% | +2.00% | +1.79% | +1.86% | +10.55%
> system time | | | | | |
> SLAB_VIRTUAL=n | 150 | 478.530 | 540.420 | 520.803 | 521.485 | 9.166
> SLAB_VIRTUAL=y | 150 | 530.520 | 572.460 | 552.825 | 552.985 | 7.161
> | | +10.86% | +5.93% | +6.15% | +6.04% | -21.88%
> user time | | | | | |
> SLAB_VIRTUAL=n | 150 | 2373.540 | 2403.800 | 2386.343 | 2385.840 | 5.325
> SLAB_VIRTUAL=y | 150 | 2388.690 | 2426.290 | 2408.325 | 2408.895 | 6.667
> | | +0.64% | +0.94% | +0.92% | +0.97% | +25.20%
>
>
> I'm not exactly sure why user time increases by almost 1% on Milan, it
> could be TLB contention.
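That should be measurable directly, e.g. by comparing dTLB miss counts
for the build under the two kernels via perf. A rough sketch (the
dTLB-load-misses/dTLB-store-misses event aliases vary by CPU, and the
build command is just an example):

  import subprocess

  # perf stat writes its counter output to stderr.
  def dtlb_misses(cmd):
      result = subprocess.run(
          ["perf", "stat", "-e",
           "dTLB-load-misses,dTLB-store-misses", "--"] + cmd,
          capture_output=True, text=True, check=True)
      return [l.strip() for l in result.stderr.splitlines()
              if "dTLB" in l]

  for line in dtlb_misses(["make", "-j64", "bzImage"]):
      print(line)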
The other worrying aspect is the +6.15% increase in system time ...
which is roughly in line with what we'd expect given the +1.79%
increase in wall-clock time.
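For completeness, the same time-weighted check as above, applied to the
Milan means (where user time is ~82% of total CPU time), comes out
close to the observed wall-clock delta:

  # Same time-weighted check as above, with the Milan means.
  user_base, user_new = 2386.343, 2408.325   # user time (s)
  sys_base,  sys_new  = 520.803,  552.825    # system time (s)

  total = user_base + sys_base
  expected = (user_base / total) * (user_new / user_base - 1) \
           + (sys_base  / total) * (sys_new  / sys_base  - 1)
  print(f"expected: {expected:+.2%}")   # -> +1.86%, vs. +1.79% observed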
Thanks,
Ingo