From: Dave Hansen <dave.hansen@intel.com>
To: Jiadong Sun <sunjiadong.lff@bytedance.com>,
luto@kernel.org, juri.lelli@redhat.com,
vincent.guittot@linaro.org, akpm@linux-foundation.org
Cc: x86@kernel.org, tglx@linutronix.de, mingo@redhat.com,
bp@alien8.de, dave.hansen@linux.intel.com,
viro@zeniv.linux.org.uk, linux-kernel@vger.kernel.org,
linux-perf-users@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org, duanxiongchun@bytedance.com,
yinhongbo@bytedance.com, dengliang.1214@bytedance.com,
xieyongji@bytedance.com, chaiwen.cc@bytedance.com,
songmuchun@bytedance.com, yuanzhu@bytedance.com
Subject: Re: [RFC] optimize cost of inter-process communication
Date: Wed, 30 Apr 2025 07:03:12 -0700 [thread overview]
Message-ID: <b22117bf-6b2c-4a98-8a40-48163c1e25d9@intel.com> (raw)
In-Reply-To: <CAP2HCOmAkRVTci0ObtyW=3v6GFOrt9zCn2NwLUbZ+Di49xkBiw@mail.gmail.com>
On 4/30/25 02:16, Jiadong Sun wrote:
> To attain the first objective, processes that use RPAL share the same
> virtual address space. So one process can access another's data directly
> via a data pointer. This means data can be transferred from one process
> to another with just one copy operation.
It's a neat idea and it is impressive that you got it running at all.
But it's a *HUGE* change in the process model and it's obviously not
generally applicable. You literally don't have small processes any more.
You only have big ones that are *VERY* expensive to tear down.
> RPAL is currently implemented on the Linux v5.15 kernel
Hmmm: "This branch is 196946 commits ahead of, 17734 commits behind
5.4.143-velinux."
So this isn't even on top of a stable kernel? It's 10,000 lines on top
of a ~200k commit fork? Yeah, I can see how it would take some
substantial effort to rebase it to mainline. It's also a _bit_ of a
stretch to call this a v5.15 kernel.
Basically, I don't doubt that this is good for _you_ and your
applications. But would anybody else ever use it? I seriously doubt it.
It's too big of a change in model and it has too many compromises in its
design. It's fundamentally not aligned with how the kernel evolves, both
in its design and its development process.
Unless something big changes (like a lot of users suddenly dying for
this functionality), this isn't even something I'd remotely consider
spending any time on looking at again. Sorry.
Thread overview: 3+ messages
2025-04-30 9:16 Jiadong Sun
2025-04-30 10:30 ` Lorenzo Stoakes
2025-04-30 14:03 ` Dave Hansen [this message]