At CES, intoPIX used a PS5-based “wireless gaming” demo to let visitors experience compression firsthand. Beer was served: Belgian Trappistes Rochefort 8, brewed at the Trappist abbey brewery in Rochefort, Wallonia.
What intoPIX delivered at CES was a simple reframing: compression is not “a technology that throws away image quality,” but a condition of the data-explosion era. As camera counts rise, the bottleneck is moving beyond the link and into the SoC itself. Compression is returning as a core system-design discipline - one that simultaneously addresses bandwidth, power, EMI, memory, and cloud cost. The on-site message - “we’re already doing it” - suggested that in automotive, compression can shift from a taboo to a standard.
By Sang Min Han | han@autoelectronics.co.kr
A direct hit on the industry’s “bottleneck”
The meeting with intoPIX at CES went straight to the automotive industry’s bottlenecks. Pascal Pellegrin, CTO of the Automotive Group, Ben Runyan, North America Director, and Jungmin Joo, Head of intoPIX Korea, opened a conversation about compression - something the industry has long avoided.
intoPIX is a Belgium-based company that develops low-latency image and video compression codec IP. In automotive, its core weapons are broadly twofold:
- TicoXS, an ultra-low-latency video transport codec used as the baseline for the JPEG XS international standard (ISO), and
- TicoRAW, a lightweight compression technology for RAW sensor data (Bayer/CFA).
In other words: as sensor and video data grow, intoPIX offers compression that keeps high quality while adding almost no latency - delivered both as IP (hardware cores) and software libraries.
Compression has long been taboo in automotive. The prevailing belief was that “compression means loss, and latency makes it unusable for many applications.” intoPIX aims to flip that taboo by changing the conditions.
“Our technology is an image and video compression engine. In automotive, compression wasn’t adopted at all. People believed compression means loss and latency. But intoPIX codecs are basically visually lossless, and the total latency - encoding plus decoding - is at the microsecond level. You can think of it as almost nothing.”
What made this sound like more than marketing was the realism of intoPIX’s go-to-market sequence in automotive. Instead of pushing in-cabin real-time streaming first, they started by solving data cost and operations.
Not “autonomy,” but “data cost”
The first place intoPIX enters automotive is data logging and the cloud. And the first word they brought up wasn’t “learning” - it was “cost.”
“You capture images for data logs, upload them to the cloud for training, and store them - but the data is enormous. If you don’t compress it, the cloud storage cost becomes huge.”
ADAS/autonomous performance ultimately depends on how much data you can collect - how often, how long, and how widely you can iterate. But as data grows, upload and storage costs compound, and model improvement hits a budget wall. intoPIX tries to frame this as a problem of operational rationality.
Another consistent emphasis was implementation without being locked to a specific chip - a push for flexibility and broad compatibility.
“We’re very flexible. You can compress and decode not only in hardware but also easily in software. It works on x86, ARM, and GPU-based Jetson environments - so it’s easy to implement across today’s automotive ecosystem.”
The key point isn’t merely “it runs anywhere,” but that compression can function as an operational tool across the entire pipeline: in-vehicle storage → upload → cloud training → reuse. That’s why the company’s first message was not “video transport technology,” but the cost structure of the data pipeline.
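To make that cost framing concrete, here is a back-of-envelope sketch in Python. The camera count, logging hours, per-camera data rate, compression ratio, and $0.02/GB-month storage price are all hypothetical illustration values, not figures from intoPIX or any cloud vendor:

```python
# Back-of-envelope cloud-retention cost for ADAS data logs.
# Every figure here is an illustrative assumption, not an intoPIX
# or cloud-vendor number.

def monthly_storage_cost_usd(cameras: int, hours_per_day: float,
                             gbps_per_camera: float, compression_ratio: float,
                             usd_per_gb_month: float = 0.02) -> float:
    """Cost of keeping one month of logged video in cloud storage."""
    seconds = hours_per_day * 3600 * 30               # logging seconds per month
    raw_gb = cameras * gbps_per_camera * seconds / 8  # gigabits -> gigabytes
    return raw_gb / compression_ratio * usd_per_gb_month

# A hypothetical 8-camera test vehicle logging 4 h/day at ~2 Gbps per camera.
print(monthly_storage_cost_usd(8, 4, 2.0, compression_ratio=1))   # uncompressed
print(monthly_storage_cost_usd(8, 4, 2.0, compression_ratio=10))  # 10:1 compressed
```

Even with made-up inputs, the shape of the result is the point: storage cost scales linearly with data volume, so a 10:1 ratio cuts the monthly bill by the same factor.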
ADAS data becomes a cost the moment it is accumulated. In-vehicle storage capacity, upload time, and cloud retention costs elevate compression into an operational technology (top). Camera count growth quickly turns into a “physics of connectivity” problem - bandwidth, wiring, power, heat, EMI, and then the SoC interface/memory bottleneck demand compression.
Sensor-side compression: less bandwidth means less power and less EMI
If data logging/cloud is the first gateway, the next stage is the sensor.
Cars now carry more cameras, and that is a burden. More cameras increase link bandwidth, and higher bandwidth pulls up power and EMI.
“As more cameras get attached, power consumption gets too high, and EMI issues become serious. To cover EMI, you end up needing expensive cables and shielding. If you apply compression first at the camera, the data volume on the transport segment drops, link bandwidth goes down - and power, EMI, and even the SoC’s internal processing load all decrease in a chain reaction.”
The compression intoPIX describes is not “throw away quality to reduce size.” The moment you reduce data rate at the sensor, wiring, shielding, heat, and power all shift at once. Compression is becoming less about “performance” and more about fundamental system design.
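The bandwidth side of that chain reaction can be sketched with simple arithmetic. The resolution, frame rate, bit depth, and 4:1 ratio below are illustrative assumptions, not quoted intoPIX figures:

```python
# Per-camera serial-link bandwidth for a Bayer RAW stream, ignoring
# blanking and protocol overhead. All parameters are illustrative
# assumptions, not quoted vendor figures.

def raw_link_gbps(width: int, height: int, fps: int,
                  bits_per_pixel: int, ratio: float = 1.0) -> float:
    """Payload bandwidth in Gbps after an optional compression ratio."""
    return width * height * fps * bits_per_pixel / ratio / 1e9

uncompressed = raw_link_gbps(3840, 2160, 30, 12)         # ~3 Gbps per camera
compressed = raw_link_gbps(3840, 2160, 30, 12, ratio=4)  # ~0.75 Gbps
print(uncompressed, compressed)
```

Dropping a per-camera link from roughly 3 Gbps to under 1 Gbps is what relaxes the cable, shielding, and power budget downstream.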
The real bottleneck crosses the link and enters the SoC
This is the most interesting part: intoPIX doesn’t stop the story at the link bottleneck. Their point is that as camera counts grow, the bottleneck inevitably moves inside the SoC.
“IVI and autonomous-driving chipset vendors ultimately take camera inputs into interfaces like MIPI on the SoC. But you can’t increase MIPI lanes indefinitely. So when you try to connect many cameras within limited lanes, you need a compression codec.”
As cameras multiply, the cables scream first - then the SoC. With limited interface lanes and increasing input volume, memory bandwidth becomes the next wall.
“Even when compressed data enters the SoC and gets processed, if the bandwidth is too large, memory read/write bandwidth becomes excessive and creates problems.”
This is the real production reality for OEMs and Tier 1s. In the past, the philosophy of “uncompressed is better” dominated. But as camera-centric architectures scale up, that philosophy collides with the hard limits of bandwidth, power, EMI, and memory. Compression returns to the table - not as a preference, but as a requirement.
In other words: “We didn’t even talk about compression before, but now that cameras have increased, we have to.”
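A rough lane-budget sketch shows why the lane limit forces the issue. The per-lane rate and lanes-per-port below are assumed round numbers for illustration, not any specific SoC’s interface spec:

```python
# How many cameras fit on one SoC camera-input port, with and without
# compression. The per-lane rate and lane count are assumed round
# numbers, not a specific chipset's specification.

LANE_GBPS = 2.5        # assumed serial-interface data-lane rate
LANES_PER_PORT = 4     # assumed lanes per camera input port

def cameras_per_port(camera_gbps: float, compression_ratio: float = 1.0) -> int:
    """Whole cameras that fit inside one port's aggregate bandwidth."""
    port_gbps = LANE_GBPS * LANES_PER_PORT
    return int(port_gbps // (camera_gbps / compression_ratio))

print(cameras_per_port(3.0))       # uncompressed ~3 Gbps cameras
print(cameras_per_port(3.0, 4.0))  # same cameras with 4:1 compression
```

Under these assumptions a 10 Gbps port carries three uncompressed cameras but thirteen compressed ones - the “limited lanes, many cameras” arithmetic the quote describes.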
The chain reaction enabled by in-sensor compression: lowering power, EMI, bandwidth, and SoC bottlenecks together.
Proven at CES
The conversation briefly shifted to TVs and gaming.
On site, intoPIX used a “wireless gaming” demo to make compression tangible. A Sony PS5 was running under the booth, and video was transmitted to the TV without a cable - over a 60 GHz wireless link or Wi-Fi 7. According to their explanation, an uncompressed 4K signal can require up to about 12 Gbps of bandwidth, but the demo compressed it down to around 1 Gbps for transmission.
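Those demo figures check out on the back of an envelope, if we assume an 8-bit RGB 4K60 signal (the exact pixel format is our assumption, not stated at the booth):

```python
# Sanity-checking the demo's "~12 Gbps uncompressed 4K" figure.
# 8-bit RGB at 4K60 is an assumed signal format.
width, height, fps, bits_per_pixel = 3840, 2160, 60, 24
uncompressed_gbps = width * height * fps * bits_per_pixel / 1e9
compression_ratio = uncompressed_gbps / 1.0   # vs the ~1 Gbps demo link
print(round(uncompressed_gbps, 1), round(compression_ratio, 1))
```

That lands at roughly 11.9 Gbps uncompressed, so fitting it into a ~1 Gbps wireless link implies a compression ratio on the order of 12:1.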
They also showcased a “super concealment” technique that keeps visual degradation barely noticeable even when frames are partially dropped - treating perceived quality in wireless environments as an operational concern. It sits in the same context as LG’s wireless-TV highlight at CES: latency once made high-quality wireless gaming difficult, and now it is becoming feasible.
Between “we don’t compress” and the reality that “we must”
What kept repeating on site felt almost ironic.
There is still a camp that builds its concept around “we send without compression.” That philosophy has clear benefits. But the collision with reality isn’t about slogans - it’s about the simple physical condition of rising camera counts. The moment sensors increase and data grows, the burden rises in a chain: link bandwidth, power, EMI, and then SoC-internal bandwidth.
When asked, “Since when did this become such a big issue?” the answer was blunt:
“About 2 - 3 years ago.”
In the cockpit, data has already become too big - and it will grow further. Cameras are increasing, and bottlenecks are moving beyond the link into the SoC. That’s why compression is returning not as “loss,” but as a system language: bandwidth, power, EMI, memory, and cloud cost.
AEM (Automotive Electronics Magazine)
<Copyright © AEM. Unauthorized reproduction and redistribution prohibited>