What the CES 2026 Connect2Car panel really left behind: trust, validation, revenue, and collaboration
The message the CES 2026 Connect2Car panel delivered about Automotive AI was not that AI is simply “getting smarter.” It was that the real issue is how the industry will use it - safely, quickly, and in a way that actually works in the real world. Without exaggerating the promise of the technology, the panelists left behind the most practical set of questions the automotive industry must answer if AI is to become not just a concept but a product backed by an execution plan.
By Sang Min Han | han@autoelectronics.co.kr
Moderator: John Ellis, President of Codethink Limited
Panelists: Yansong Chen, Founder & CEO of ROPIX LLC
Rebecca Delgado, VP Engineering, Autonomy & AI of Torc Robotics
Maurice Dantzler, Executive Director, Cummins Inc.
Madison Beebe, Director of Software Portfolio & Technology Strategy, Ford
Dirk Slama, Chairman of digital.auto and Bosch
“AI isn’t a feature problem. It’s an execution problem.”
CES has countless stages where people talk about AI. But what this panel left behind wasn’t the usual question - what can AI do?
It was something more uncomfortable and more real: when we actually try to use AI, what questions do companies and organizations immediately run into?
Can it be trusted? Can it generate revenue? Who is responsible? How do we validate it?
And the most important question of all: can we put this inside the production and operational timetable - without breaking everything else?
Framed as “a conversation among friends,” almost like a discussion at a bar, the panel made one point surprisingly clear: AI in automotive is no longer a technology debate. It’s a debate about alignment - and execution.
The conversation unfolded through four themes:
- AI must become a product, not a trend - trust, revenue, and the reality of “productizing.”
- AI is a tool, not a miracle - human-in-the-loop responsibility and honest data.
- Complexity in the era of centralization and SDV - integration, validation, and the harsher realities of safety and cybersecurity.
- In the end, what remains is execution - problems that cannot be solved inside one company must be solved through an ecosystem, standardization, and a social contract of collaboration.
The conclusion is simple:
Key Takeaways
- AI is not a “feature.” It’s a productization challenge. It’s not enough to build a model; you must design the trust, revenue, and operations around it.
- Human-in-the-loop is the core principle. AI replaces repetitive work, not responsibility. Accountability remains human and organizational.
- Data isn’t fuel - it’s a risk asset. If data quality, integrity, security, and privacy fail, AI becomes a liability instead of a breakthrough.
- Centralization doesn’t simplify; it concentrates validation burden. Once AI sits on top of SDV, integration, safety, security, and diagnostics become even more demanding.
- AI succeeds or fails in execution, not algorithms. Real failures usually come from schedules, validation, and collaboration breakdowns - and the costs are massive.
- The answer is collaboration, standardization, and an ecosystem “social contract.” Internal optimization is not enough; timelines, validation, and responsibility must be aligned across players.
AI Isn’t About “Possibility.” It’s About Productization
John Ellis: Today’s theme is Automotive AI - unlocking new possibilities and experiences. Rebecca, you’re working on autonomy applications at Torc Robotics. Is AI helpful or not?
Rebecca Delgado: It’s not just helpful - it’s foundational. It’s the core of what we’re bringing to market. And it’s not a fad. It’s a real system that enables what the market needs - accelerated experiences and capability, especially in truck autonomy. AI is transforming the world and will continue to do so.
But there are plenty of challenges. We need trust. We need revenue. And we need to productize it. There are so many challenges, and they’re all orthogonal to each other.
John Ellis: Yansong - this is what you and I always say: AI is just another tool, and we can’t forget the fundamentals. If we want to achieve what Rebecca outlined - trust, revenue, productization - what are the real challenges?
Yansong Chen: As the only founder on this panel, and one of only two entrepreneurs here, I’ll bring a different perspective. When AI meets the reality of automotive development - especially in system-level disciplines like functional safety - it cannot replace the fundamentals.
Discipline, rigor, systems thinking - those still come from humans and human teams. AI is only as good as the information you put into it, which means we still own the responsibility.
Rebecca mentioned trust. I would start by saying: we need to trust ourselves first.
John Ellis: Madison - what are the challenges for a traditional OEM? Software complexity is exploding, scaling beyond the physical world, tools and processes are aging… what do we need to do?
Madison Beebe: I liked the comment that it isn’t a fad - and the emphasis on trust. We can’t approach AI like a trend. We need to embrace it and invest in it, but smartly. And it has to span the entire development cycle and product lifecycle.
The first challenge is people. How do engineers interpret AI? Is it going to take their jobs, or does it give them efficiency and scale? Data integrity is key - AI is only as good as what you feed it. So operationally and culturally, people and data are huge challenges.
The second challenge is knowing when to invest, how to invest, and making sure the investment is targeted toward revenue-generating opportunities.
I’ve been looking at AI across development workflows - efficiency, velocity, and quality of outcomes. And what’s unique about AI in automotive is the opportunity to better understand the customer and deliver solutions tailored to individuals, companies, or operators.
Complexity in the Era of Centralization: Safety, Security, and Validation Get Sharper
John Ellis: Dirk - through the Bosch lens and ecosystem partnerships, what are the challenges the ecosystem will face? Partnerships were built around physical-world rules. Now we’re in a digital world with different rules. Can AI help - or does it become a burden?
Dirk Slama: That’s a very important question. Let me start with a personal experience. We’ve been talking about coding and safe coding. Honestly, I haven’t coded for 10 or 15 years - I haven’t been in that zone.
But at the end of last year, I spent three months completely sucked into VS Code. Long nights like in the old days - until my wife told me to stop. It was fun, but it felt like remote-controlling a drunken monkey kung fu fighter.
This monkey is a total expert - but completely drunk.
That’s the challenge. The power is massive, but it’s jagged. Unexpected, uncontrolled results appear.
How do we apply that to our industry and to established value chains - Tier 1s, OEMs? We still have foundational things that must always work in the vehicle. And then we can build additional features on top.
There’s a lot of talk now about agents that understand your emotions inside the vehicle. That’s not something I believe we will develop purely through ISO 26262-style processes.
The challenge is making all of this work together, leveraging the power, without endangering critical systems - both technologically and across the ecosystem.
John Ellis: Maurice - you’ve done safety. You’ve done cybersecurity. Are you freaked out by any of this? And if you’re not, why not?
Maurice Dantzler: I’m not freaked out at all. I see AI as a tool.
I’ve always struggled with one question: how do we help teams manage complexity? Systems are getting more complex, and it’s harder to do everything right in time for production launches. The V-model is great - but how often do we truly have time to do it properly?
AI will help manage data, and help us ask systems-level questions - with the human still in the loop.
And I’ll tell you something: when new technology fails, it’s usually not the technology that failed. It’s execution. We could make it work, but we didn’t have enough time, and we made engineering mistakes that led to delays and extra cost.
AI can bring efficiency, reduce cost, and help check the work that should be done.
John Ellis: Yansong - do you agree? Is it about trusting engineering - and getting management aligned too?
Yansong Chen: This morning in my Uber, I chatted with the driver. She loves technology, but she said she was scared of AI and robots - two big buzzwords of this week.
So I asked: what exactly scares you? She said: it feels like a black box. She can’t see inside it.
And that’s the trust problem. But in the end, she also said: I need to learn the tool to make it useful.
That’s the point. AI is another tool. We already learned how the internet reshaped modern communication. AI will reshape things too. We can’t abandon fundamentals - we have to integrate the tool responsibly.
What Remains Isn’t Technology. It’s Execution: Collaboration and a Social Contract
John Ellis: Madison - Ford is one of the most trusted brands. How do you talk about trust with AI, both internally and to customers?
Madison Beebe: Trust has multiple layers - internal trust and external trust.
Engineers can list fifteen use cases for how AI can accelerate their work. But finance will ask: how do we make the money back?
Trust begins with targeted use cases, targeted investment, and a plan to recoup that investment. You define expectations, optimize toward them - and stop if they don’t work.
My focus has been AI across development processes - efficiency, velocity, and output quality.
From a customer perspective, trust grows when they see improvements over time - vehicles learning, adapting, delivering personalized experiences. But data integrity and privacy are essential.
And honestly, consumers already use AI every day - they just don’t always realize it.
John Ellis: Dirk - trust isn’t just legal or contractual. It’s also social. As an ecosystem representative, what do we need to make that social contract real?
Dirk Slama: Everyone is using AI/ML every day. And right now, it’s all about agents. Agents that book your travel, agents that do engineering tasks.
So I wanted to run a little experiment this morning. Who here has given their email password or credit card details to an AI agent and let it operate independently?
Nobody. That’s the answer.
We still have a long way to go before we fully trust that drunken monkey kung fu expert.
Rebecca Delgado: Let me add something. There’s a saying: when all you have is a hammer, everything looks like a nail. That’s where we are. AI, AI, AI - if you don’t have AI, people think your company has no strategy.
But even in a company where AI is foundational to the product, it all comes down to systems thinking - where you apply AI in development, and where you apply it at runtime.
AI accelerates capability across the value chain. It improves development efficiency and builds better products - and enables product capabilities that would take forever to develop manually.
But trust requires the right people. AI experts must be supported by finance, safety, legal, and open-source communities.
It’s like adding another country to the United Nations - you need to learn a new language. An ISO 26262 expert and an AI model expert don’t speak the same language.
The teams who win will be those who connect these worlds - and restrain themselves from deploying what isn’t ready. Sound systems engineering will win - if there’s a business case.
John Ellis: Then I’ll ask directly: do we have a business case? Automotive has a “shiny keys” problem - always chasing the next shiny thing. So are we chasing something here, or is there a real opportunity? Assume we solved the technical problems. Can we create a sustainable business model beyond just selling cars?
Dirk Slama: I definitely think we have a strong business case. Self-driving was promised ten years ago, and it’s still moving inch by inch. We need to cross the threshold where insurance companies believe it’s safer than the average human.
But the industry has already changed dramatically in the last five to ten years. Development cycles are shrinking - from seven years to less than two. If we don’t embrace this and make it part of the new automotive DNA, we won’t succeed. That’s the business case.
Yansong Chen: Electrification, SDV, and now AI on top. AI will become another strategic roadmap layer, but we’re still early. We’re only two or three years into this cycle since ChatGPT changed the world in 2023.
CES 2024 and 2025 had the same question: is it real AI, or just automation with an AI label?
AI adoption will accelerate, but near-term we’re still targeting low-hanging fruit - efficiency improvements - while electrification and SDV are still struggling with commercialization and profitability.
Maurice Dantzler: In the classical quality-cost-delivery perspective, AI can improve all three at once. Usually there’s a trade-off - better quality means higher cost. But AI, especially in simulation, can open doors to improve quality, cost, and delivery simultaneously.
AI can also help organizations tap into knowledge and experience built over a hundred years.
And with SDV, launches won’t be based only on what’s inside your company. You’ll depend on coordinated schedules, coordinated validation plans across multiple companies. I don’t know how you do that without AI.
AI can get you 75% of the way, and seasoned experts can handle final verification.
John Ellis: We used to focus only on what’s inside. Now we must focus on ecosystems. Relationships aren’t only contractual - they’re social. Execution is where we fail. How do we overcome it?
Maurice Dantzler: First, I’d probably ask AI what failures we’ve experienced and why - and use that to guide how we approach it.
But execution across companies, regulators, safety, cyber - it’s too much. That’s why we need tools.
I remember when people said you shouldn’t use calculators - you should do long division manually. When was the last time anyone did that?
Yansong Chen: I can’t avoid mentioning China. They’re fast in electrification, fast in SDV, and now pivoting rapidly into AI-defined vehicles.
What I want to highlight is: look at what’s already been done, and don’t reinvent the wheel. China drove efficiency through standardization.
In North America, we should learn from what’s working, and come together through places like Connected Car and COVESA.
At CES, COVESA has an AI working group event. Over the last three months, we’ve been gathering companies to share real use cases - not just planning - and how to do it consistently. Standardization and efficiency are key.
Rebecca Delgado: From the autonomy company perspective, this is a marathon - with many sprints inside it. It’s fundamentally a resource problem: compute, talent, money, the cost to generate data, train models, maintain models.
That’s why collaboration is more realistic - universities, research entities, ecosystem partners. Historically, we worked in silos and guarded IP. But now we may need to give up some exclusive ownership to move faster and reduce cost.
It requires long-term investment, clear goals, and a collaboration sweet spot.
Madison Beebe: Exactly. Look at the last five to seven years of SDV. Massive internal investment, proprietary architectures, companies moving in their own way.
Maybe AI shouldn’t repeat that path.
And about the business case - maybe the question isn’t whether there is a business case, but whether it becomes a revenue-generating opportunity. Because by the time it’s production-ready, it may already be table stakes.
Dirk Slama: I really like the point about looking holistically at systems architecture. AI-defined vehicles must be built on top of software-defined vehicles. SDV provides the safe layering - encapsulating the things you really cannot break, and building new features on top.
And this applies to engineering processes too. Testing can be about breaking things. So what better tool than a drunken monkey kung fu expert to stress-test systems and see how far we can push them - safely, step by step?
AI cannot be completed through solo acceleration. Just as SDV demanded standardization and a shared foundation, AI demands that the ecosystem aligns its language, its validation rules, and its responsibility boundaries. Productization is not a purely technical problem - it’s an operational, contractual, and trust problem.
The moment the automotive industry truly “gets AI” will not be when AI becomes smart enough. It will be when the industry becomes aligned enough to handle it.
AEM (Automotive Electronics Magazine)
<Copyright © AEM. Unauthorized reproduction and redistribution prohibited.>