From: Nadav Amit <nadav.amit@gmail.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Matthew Wilcox <willy@infradead.org>,
Vlastimil Babka <vbabka@suse.cz>, Linux-MM <linux-mm@kvack.org>
Subject: Re: Number of arguments in vmalloc.c
Date: Fri, 7 Dec 2018 15:12:56 -0800 [thread overview]
Message-ID: <C29C792A-3F47-482D-B0D8-99EABEDF8882@gmail.com> (raw)
In-Reply-To: <20181207084550.GA2237@hirez.programming.kicks-ass.net>
[ We can start a new thread, since I have the tendency to hijack threads. ]
> On Dec 7, 2018, at 12:45 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Thu, Dec 06, 2018 at 09:26:24AM -0800, Nadav Amit wrote:
>>> On Dec 6, 2018, at 2:25 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>>>
>>> On Thu, Dec 06, 2018 at 12:28:26AM -0800, Nadav Amit wrote:
>>>> [ +Peter ]
>>>>
>>>> So I dug some more (I’m still not done), and found various trivial things
>>>> (e.g., storing zero extending u32 immediate is shorter for registers,
>>>> inlining already takes place).
>>>>
>>>> *But* there is one thing that may require some attention - patch
>>>> b59167ac7bafd ("x86/percpu: Fix this_cpu_read()”) set ordering constraints
>>>> on the VM_ARGS() evaluation. And this patch also imposes, it appears,
>>>> (unnecessary) constraints on other pieces of code.
>>>>
>>>> These constraints are due to the addition of the volatile keyword for
>>>> this_cpu_read() by the patch. This affects at least 68 functions in my
>>>> kernel build, some of which are hot (I think), e.g., finish_task_switch(),
>>>> smp_x86_platform_ipi() and select_idle_sibling().
>>>>
>>>> Peter, perhaps the solution was too big of a hammer? Is it possible instead
>>>> to create a separate "this_cpu_read_once()” with the volatile keyword? Such
>>>> a function can be used for native_sched_clock() and other seqlocks, etc.
>>>
>>> No. like the commit writes this_cpu_read() _must_ imply READ_ONCE(). If
>>> you want something else, use something else, there's plenty other
>>> options available.
>>>
>>> There's this_cpu_op_stable(), but also __this_cpu_read() and
>>> raw_this_cpu_read() (which currently don't differ from this_cpu_read()
>>> but could).
>>
>> Would setting the inline assembly memory operand as both input and output
>> be better than using the “volatile”?
>
> I don't know.. I'm forever befuddled by the exact semantics of gcc
> inline asm.
>
>> I think that if you do that, the compiler would treat the this_cpu_read()
>> as something that changes the per-cpu variable, which would make it invalid
>> to reuse a previously read value. At the same time, it would not prevent
>> reordering the read with other accesses.
>
> So the thing is; as I wrote, the generic version of this_cpu_*() is:
>
> local_irq_save();
> __this_cpu_*();
> local_irq_restore();
>
> And per local_irq_{save,restore}() including compiler barriers that
> cannot be reordered around either.
>
> And per the principle of least surprise, I think our primitives should
> have similar semantics.
I guess so, but as you’ll see below, the end result is ugly.
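To be concrete, what I meant by making the memory operand both an input and
an output is roughly the following (untested sketch; the macro name and exact
asm template are made up for illustration, this is not the actual
arch/x86/include/asm/percpu.h code):

#define this_cpu_read_inout(var)				\
({								\
	typeof(var) pfo_ret__;					\
	/* "+m" makes the compiler assume the asm writes var */\
	asm("mov %%gs:%1, %0"					\
	    : "=r" (pfo_ret__), "+m" (var));			\
	pfo_ret__;						\
})

The idea is that the compiler can no longer reuse a previously read value,
since each asm appears to modify the variable, yet the asm itself is not
volatile.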
> I'm actually having difficulty finding the this_cpu_read() in any of the
> functions you mention, so I cannot make any concrete suggestions other
> than pointing at the alternative functions available.
So I dug deeper into the code to understand a couple of the differences. In
the case of select_idle_sibling(), the patch (Peter’s) increases the function
code size by 123 bytes (over a baseline of 986). The per-cpu variable is read
through the following call chain:
select_idle_sibling()
=> select_idle_cpu()
=> local_clock()
=> raw_smp_processor_id()
This results in two more calls to sched_clock_cpu(), as the compiler assumes
the processor id may change in between (which obviously cannot happen here).
There may be more changes around that I did not fully analyze, but at the
very least, reading the processor id should not be “volatile”.
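To make the concern concrete, here is a contrived sketch (hypothetical code,
not the actual scheduler source) of the pattern that gets pessimized:

static void example(void)
{
	int a, b;

	preempt_disable();
	a = raw_smp_processor_id();	/* volatile this_cpu_read(cpu_number) */
	b = raw_smp_processor_id();	/* cannot be merged with the first read */
	WARN_ON(a != b);		/* never fires, yet both loads are emitted */
	preempt_enable();
}

Inside the preemption-disabled region the id cannot change, but because
raw_smp_processor_id() is now a volatile this_cpu_read(cpu_number), the
compiler has to emit both loads.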
As for finish_task_switch(), the impact is only a few bytes, but it is still
unnecessary. It appears that with your patch, preempt_count() causes multiple
reads of __preempt_count in this code:
	if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
		      "corrupted preempt_count: %s/%d/0x%x\n",
		      current->comm, current->pid, preempt_count()))
		preempt_count_set(FORK_PREEMPT_COUNT);
Again, this is unwarranted: an interrupt cannot leave the preemption count
changed underneath this code, so re-reading it buys nothing.
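Just to illustrate (a hypothetical rearrangement, not something I am actually
proposing): reading the count once into a local variable would keep it to a
single load regardless of the volatile:

	int pc = preempt_count();

	if (WARN_ONCE(pc != 2*PREEMPT_DISABLE_OFFSET,
		      "corrupted preempt_count: %s/%d/0x%x\n",
		      current->comm, current->pid, pc))
		preempt_count_set(FORK_PREEMPT_COUNT);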