From: Alexei Starovoitov <alexei.starovoitov@gmail.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: bpf <bpf@vger.kernel.org>, Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>,
Martin KaFai Lau <martin.lau@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Peter Zijlstra <peterz@infradead.org>,
Vlastimil Babka <vbabka@suse.cz>,
Sebastian Sewior <bigeasy@linutronix.de>,
Steven Rostedt <rostedt@goodmis.org>,
Michal Hocko <mhocko@suse.com>,
Shakeel Butt <shakeel.butt@linux.dev>,
linux-mm <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>,
Johannes Weiner <hannes@cmpxchg.org>
Subject: Re: [GIT PULL] Introduce try_alloc_pages for 6.15
Date: Sun, 30 Mar 2025 17:33:27 -0700
Message-ID: <CAADnVQKyLod8gNz-RR2=bs=vJJWiGhZ5GB4t68aNPNWndptr0w@mail.gmail.com>
In-Reply-To: <CAHk-=wgpYOGdQ+f62nbAB4xKLRbxnuJD+2uPBmRzSWCo5XkEGA@mail.gmail.com>
On Sun, Mar 30, 2025 at 3:08 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> On Sun, 30 Mar 2025 at 14:30, Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > But to avoid being finger pointed, I'll switch to checking alloc_flags
> > first. It does seem a better trade off to avoid cache bouncing because
> > of 2nd cmpxchg. Though when I wrote it this way I convinced myself and
> > others that it's faster to do trylock first to avoid branch misprediction.
>
> Yes, the really hot paths (ie core locking) do the "trylock -> read
> spinning" for that reason. Then for the normal case, _only_ the
> trylock is in the path, and that's the best of both worlds.
>
> And in practice, doing two compare-and-exchange operations actually
> does work fine, because the cacheline will generally be sticky enough
> that you don't actually get much extra cacheline bouncing.
Right, but I also realized that in the contended case there is
an unnecessary irq save/restore pair.
I posted the fix:
https://lore.kernel.org/bpf/20250331002809.94758-1-alexei.starovoitov@gmail.com/
Maybe apply it directly?
I'll send the renaming fix once we converge on a good name.
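The redundant save/restore in the contended case can be illustrated with another toy sketch. Everything here is hypothetical: fake_irq_save()/fake_irq_restore() merely stand in for local_irq_save()/local_irq_restore(), and this is not the actual page allocator code. The idea is to probe the lock with a plain load first, so a busy lock costs neither a failed cmpxchg nor an irq save/restore pair:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Toy stand-ins for local_irq_save()/local_irq_restore(); in a real
 * kernel these manipulate the CPU interrupt-enable state. */
static int fake_irq_state = 1;			/* 1 = irqs enabled */

static unsigned long fake_irq_save(void)
{
	unsigned long flags = (unsigned long)fake_irq_state;
	fake_irq_state = 0;
	return flags;
}

static void fake_irq_restore(unsigned long flags)
{
	fake_irq_state = (int)flags;
}

static atomic_int demo_lock;			/* 0 = free, 1 = held */

/* Probe with a plain load before committing to the irq-disable +
 * trylock sequence: the contended case bails out without ever
 * touching the irq state. */
static bool demo_trylock_irqsave(unsigned long *flags)
{
	int expected = 0;

	if (atomic_load_explicit(&demo_lock, memory_order_relaxed))
		return false;			/* no irq save/restore here */

	*flags = fake_irq_save();
	if (atomic_compare_exchange_strong(&demo_lock, &expected, 1))
		return true;

	/* Lost the race: undo the save before reporting failure. */
	fake_irq_restore(*flags);
	return false;
}
```

The early-exit load is only a heuristic (the lock can still be stolen between the load and the cmpxchg, hence the restore on the losing path), but it keeps the common contended case cheap.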