From: Yeoreum Yun <yeoreum.yun@arm.com>
To: Dave Hansen <dave.hansen@intel.com>
Cc: lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
bpf@vger.kernel.org, catalin.marinas@arm.com, david@kernel.org,
ryan.roberts@arm.com, kevin.brodsky@arm.com,
sebastian.osterlund@intel.com, dave.hansen@linux.intel.com,
rick.p.edgecombe@intel.com
Subject: Re: [LSF/MM/BPF TOPIC] eBPF isolation with pkeys
Date: Mon, 16 Feb 2026 09:57:52 +0000 [thread overview]
Message-ID: <aZLqIOsHeAUfFjXJ@e129823.arm.com> (raw)
In-Reply-To: <d6d37293-592f-4109-90e9-20b77b023476@intel.com>
Hi Dave,
> BTW, one high-level thing: what you're talking about here really is a
> kind of hardening or kernel self-protection. It's in the spirit of Kees
> Cook's work: how can kernel security be resilient even in the face of
> kernel bugs and attackers exploiting those bugs?
>
> Those who buy into the idea of Kees's work will likely agree with the
> premise of this patch set. Those that don't, won't. :)
Yes, exactly. But one small wish of mine is that most people will view this positively...
>
> On 2/12/26 09:14, Yeoreum Yun wrote:
> >>> To that end, this discussion introduces a set of new allocator APIs and
> >>> explores more extensible API designs:
> >>>
> >>> - kmalloc_pkey series
> >>> - vmalloc_pkey series
> >>> - alloc_percpu_pkey series
> >>
> >> It all sounds fun, but this doesn't exactly seem very generic. The memory
> >> that sched_ext needs to access is super different from, say, what a
> >> socket-filtering eBPF program would need.
> >>
> >> So this doesn't seem to be likely to be true "eBPF isolation" as much as
> >> sched_ext+eBPF isolation.
> >
> > Our current isolation model focuses on restricting writes and execution.
> > Therefore, if we allocate only the memory that eBPF programs must write
> > directly with a separate pkey (e.g., packet data or sock),
> > it seems to me that socket-filtering programs could also benefit from
> > the same isolation.
> This means that subsystems using eBPF need to allocate their data
> structures separately, or at least in a pkey-aware manner. They either
> need to declare the memory at allocation time, or need to be able to pay
> the cost (and the collateral damage) of changing its pkey after allocation.
>
> This _might_ be doable for the scheduler. It probably has a limited set
> of things that get written to. Most of it is statically allocated.
>
> Networking isn't my strong suit, but packet memory seems rather
> dynamically allocated and also needs to be written to by eBPF programs.
> I suspect anything that slows packet allocation down by even a few
> cycles is a non-starter.
>
> IMNHO, _any_ approach to solving this problem that start with: we just
> need a new allocator or modification to existing kernel allocators to
> track a new memory type makes it a dead end. Or, best case, a very
> surgical, targeted solution.
TBH, I think there is no real difference in _memory_ usage between
network packets and the scheduler: most BPF programs use maps, which
the programs need to write directly, and maps are always allocated
dynamically (not statically) on top of the existing allocators.
IMHO, it would be better to make the existing memory allocators
pkey-aware than to introduce a new allocator, since the only difference
from the existing allocators is the pkey awareness (a separate allocator
would just duplicate code and add complexity).
Thus, I would like to discuss ways to extend the existing allocators
first.
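
To make the shape of that extension concrete, here is a rough sketch of
what a pkey-aware slab path could look like. This is purely
illustrative: kmalloc_pkey() and alloc_pkey_cache() are hypothetical
names from this discussion, not existing kernel APIs, and since
protection keys apply per page, a real implementation would likely need
dedicated slab caches whose backing pages carry the key, rather than
retagging individual objects.

```c
/*
 * Hypothetical sketch only -- not an existing kernel API.
 * Idea: extend the existing slab allocator with per-pkey caches
 * instead of writing a new allocator.  Names are illustrative.
 */
#include <linux/slab.h>
#include <linux/pkeys.h>

#define NR_KERNEL_PKEYS	16	/* assumption: arch-dependent */

/* One dedicated cache per (size class, pkey); created lazily. */
static struct kmem_cache *pkey_caches[NR_KERNEL_PKEYS];

static void *kmalloc_pkey(size_t size, gfp_t flags, int pkey)
{
	struct kmem_cache *cache;

	if (pkey < 0 || pkey >= NR_KERNEL_PKEYS)
		return NULL;

	/*
	 * Hypothetical helper: returns a cache whose backing pages
	 * are tagged with @pkey at page-allocation time, so every
	 * object from it inherits the key with no per-object cost.
	 */
	cache = alloc_pkey_cache(size, pkey);
	if (!cache)
		return NULL;

	return kmem_cache_alloc(cache, flags);
}

/*
 * A BPF map could then place its directly-written data under a
 * BPF-only pkey, e.g.:
 *
 *	map->data = kmalloc_pkey(map->value_size, GFP_KERNEL, PKEY_BPF);
 *
 * Writes would only succeed while the BPF runtime has temporarily
 * enabled write access to PKEY_BPF (e.g. around program entry/exit).
 */
```

The attraction of this shape is exactly the point above: everything
except the pkey tagging is the existing slab machinery, so the change
is an extension rather than a parallel allocator.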
--
Sincerely,
Yeoreum Yun