From: Lars-Peter Clausen <lars@metafoo.de>
To: Laurent Pinchart <laurent.pinchart@ideasonboard.com>,
ksummit-discuss@lists.linuxfoundation.org
Cc: Zhang Rui <rui.zhang@intel.com>, Rob Herring <robh+dt@kernel.org>
Subject: Re: [Ksummit-discuss] [TECH TOPIC] Sensors and similar - subsystem interactions, divisions, bindings etc.
Date: Thu, 28 Jul 2016 20:53:26 +0200
Message-ID: <579A54A6.5060303@metafoo.de>
In-Reply-To: <2331474.UchXKu7jRM@avalon>
On 07/28/2016 06:39 PM, Laurent Pinchart wrote:
[...]
>> I think we have only a small amount of fuzz around the v4l boundary,
>> but wanted to leave the door open if anyone wants to discuss that
>> one further as it's come up a few times over recent years.
>
> Don't forget to take system integration into account. If I give you a high-
> speed ADC you will not think about V4L2 as your subsystem of choice. If the
> system designer has connected that ADC to a CPLD that generates fake
> horizontal and vertical sync signals, and connected the output to the camera
> interface of the SoC, you will be left with no choice but to use the V4L2 API.
> That's largely a userspace issue in this case, but it implies that V4L2 needs
> to define an "image format" for the ADC data.
I think this hits the core of this discussion. Today's hardware is
getting more and more generic. It is a lot more economical to produce a
single general-purpose high-volume device than a handful of low- or
medium-volume specialized devices. Even if the raw production cost of
the general-purpose part is higher (since it contains more logic), the
overall per-unit price will be lower, since the per-part contribution
of the one-time design cost is smaller in a high-volume run.
So new hardware tends to be general purpose and can be used in many
different applications.
But our kernel frameworks are designed around application-specific tasks:
* ALSA is for audio data capture/playback
* V4L2 is for video data capture/playback
* DRM is for video display
* IIO is for sensor data capture/playback
When you capture data over a particular interface, a specific meaning
is attached to the data rather than the data just being data, which is
how the hardware might see it.
On the kernel side we have started to address this with generic
frameworks like DMAengine. Depending on the application, I've used the
same DMA core with the same DMAengine driver exposed to userspace
through all four of the frameworks listed above.
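To make that concrete, a consumer of such a generic DMA core looks
roughly like the sketch below; the channel name "rx" and the function
name are made up for illustration, and the usual dmaengine_slave_config()
step is omitted. Nothing in this path knows whether the bytes are audio
samples, pixels or gyroscope readings:

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

static int start_capture(struct device *dev, dma_addr_t buf, size_t len)
{
	struct dma_chan *chan;
	struct dma_async_tx_descriptor *desc;

	/* "rx" is an assumed channel name from the device tree binding. */
	chan = dma_request_chan(dev, "rx");
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	/*
	 * Move 'len' bytes from the peripheral into the buffer at 'buf';
	 * the DMA core neither knows nor cares what the bytes mean.
	 */
	desc = dmaengine_prep_slave_single(chan, buf, len, DMA_DEV_TO_MEM,
					   DMA_PREP_INTERRUPT);
	if (!desc) {
		dma_release_channel(chan);
		return -ENOMEM;
	}

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);

	return 0;
}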
This works as long as you know that your hardware is generic and you
design the driver to be generic. But it breaks down if your hardware has
a primary function that is application-specific.
E.g. a CSI-2 receiver will most likely receive video data, so we write a
V4L2 driver for it. An I2S receiver will most likely receive audio data,
so we write an ALSA driver for it. But now somebody might decide to hook
up a gyroscope to one of these interfaces, because that might be the
best way to feed data into the particular SoC used in that system. And
then things start to fall apart.
And this is not just hypothetical; I've repeatedly seen questions about
how to make this work. I also expect that in a time-constrained
environment people will go ahead with a custom solution where they
capture audio data through V4L2, ignore all the data type hints V4L2
provides, and re-interpret the data, since their specialized application
knows what the data layout looks like.
A similar issue is that there are quite a few pieces of hardware that
are multi-use, e.g. general-purpose serial data cores that support SPI,
I2S, and similar protocols. At the moment we have to write two different
drivers for them and use compatible strings to decide which function
they should have.
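As an illustration of how that currently works, such a driver typically
keys its mode off the compatible string in its OF match table; none of
the names below belong to an existing binding, they are just a sketch of
the pattern:

#include <linux/of.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>

/*
 * Hypothetical per-function configuration; these names do not exist in
 * the kernel, they only illustrate the "one core, two functions" split.
 */
enum serial_core_function { FUNC_SPI, FUNC_I2S };

struct serial_core_mode {
	enum serial_core_function function;
};

static const struct serial_core_mode spi_mode = { .function = FUNC_SPI };
static const struct serial_core_mode i2s_mode = { .function = FUNC_I2S };

static const struct of_device_id serial_core_of_match[] = {
	{ .compatible = "vendor,serial-core-spi", .data = &spi_mode },
	{ .compatible = "vendor,serial-core-i2s", .data = &i2s_mode },
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, serial_core_of_match);

At probe time the driver would look up the matched mode (e.g. via
of_device_get_match_data()) and then register with either the SPI or the
ASoC framework accordingly.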
So going forward we might have to address this by creating a more
generic interface that allows us to exchange data between a peripheral
and an application without assigning any kind of meaning to the data
itself, and then have that meaning provided through side channels. E.g.
a V4L2 device could say "this over there is my data capture device, and
the data layout is the following". Similarly for the other frameworks
that allow capture/playback.
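To make that a little more concrete, the side channel could be as small
as a format descriptor that the owning subsystem fills in while the
transport itself stays meaning-agnostic. Everything below is a
hypothetical sketch, not a proposal for an actual ABI:

/*
 * Hypothetical sketch only: a generic transport moves opaque buffers,
 * and the owning subsystem (V4L2, ALSA, IIO, ...) describes the layout
 * through a descriptor like this instead of baking it into the
 * transport.
 */
enum stream_data_type {
	STREAM_DATA_RAW,	/* no interpretation, just bytes */
	STREAM_DATA_AUDIO,
	STREAM_DATA_VIDEO,
	STREAM_DATA_SENSOR,
};

struct stream_format {
	enum stream_data_type type;
	unsigned int bytes_per_sample;	/* size of one sample or pixel */
	unsigned int channels;		/* audio channels, image planes, ... */
	unsigned int rate;		/* samples or frames per second */
};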
With vb2 (formerly the V4L2 buffer handling code) now being independent
of the V4L2 framework, it might be a prime candidate as a starting
point. I've been meaning to re-write the IIO DMA buffer code on top of
vb2 to reduce the amount of custom code.
This of course would be a very grand task, and maybe we'd lose ourselves
in endless discussions about the details and all the corner cases that
need to be considered. But if we want to find a solution that keeps up
with the direction the hardware landscape seems to be heading in, we
might have no other choice. Otherwise I'd say it is inevitable that
we'll see more and more hardware with multiple drivers, each driver
handling a different type of application.
Such a grand unified media framework would also help for applications
where multiple streams of different data types need to be synchronized,
e.g. audio and video.
- Lars