How to Eat the SDV Elephant - One Bite at a Time
digital.auto’s Strategy for Redesigning AI and SDV
July 2025 print edition / By Sang Min Han, han@autoelectronics.co.kr





“The comeback of BTS, Season 3 of Squid Game, and AI-Defined Vehicles.”
These were the three keywords mentioned at the very beginning of the keynote speech delivered by Prof. Dirk Slama, Vice President at Bosch and Chair of digital.auto, at the Automotive Innovation Day 2025. This wasn’t some superficial gesture to charm a Korean audience. At first glance, the three topics may seem unrelated, but surprisingly, they converge into a single, profound question:
In a complex system, how do we orchestrate emotion and technology (engineering experience), humans and AI?
BTS, who orchestrated global fan data beyond K-pop with refined precision.
Squid Game, which demands intricate structure and survival strategy within seemingly simple game rules.
And the AI-powered evolution of the Software-Defined Vehicle (SDV), as described by Slama.
All three, in essence, are linked by a shared context: the balance between engineering complex systems and delivering emotional experiences - all in collaboration with AI.
“How do we eat the SDV elephant?”
This is the question Slama posed - and he provided a clear answer: “One bite at a time.”
Millions of vehicle requirements.
An explosion of derivative models.
The mismatch in pace between slow-moving mechatronics and fast-evolving AI.
He proposed to tackle these challenges with three core strategies: Agentic AI, Context Capsule, and Multi-Speed Value Streams.
This keynote was not an abstract forecast about the future.
It was a practical report from the frontline of automotive software development, where new methods of “defining the car” are already underway.
As Slama put it, an SDV co-designed by humans and AI is ultimately a re-orchestration of the entire industry.


Presentation | Prof. Dirk Slama, Chair of digital.auto and Vice President at Bosch
Summary | Sang Min Han _han@autoelectronics.co.kr




 

Vibing



“What has happened in Korea over the past year?”
From what I’ve heard, first of all, BTS has returned from their military service. I heard that many people have been eagerly waiting for their reunion.
And who here has binge-watched Season 3 of Squid Game? Could you raise your hand? Yes, quite a few of you!
Lastly, I think it’s fair to say that over the past four months, everything has been about artificial intelligence (AI). Even here in Korea, I understand that AI has become a national priority. I’ve been told that your newly appointed Deputy Prime Minister for Economic Affairs is now leading the national AI initiative.
Given that this conference is themed around Software-Defined Vehicles (SDVs), I would like to go a step further and talk about what may come next - AI-Defined Vehicles.
Let’s take a look at the AI Hype Cycle. I think we can all agree that over the past two years, Generative AI has been the most significant technological trend. But if we refer to Gartner’s Hype Cycle, we’ll see that AI is now entering the so-called “trough of disillusionment,” where expectations begin to temper.
This aligns with what many of us are experiencing: generative AI still has some challenges to overcome. A prominent issue is hallucination - where the AI generates incorrect or nonsensical outputs. Then there’s the concept of “jagged intelligence”, as described by Andrej Karpathy. Sometimes AI is so smart, it amazes us, but at other times, it makes completely absurd mistakes.
Another key limitation is AI’s lack of long-term memory. The way AI works today, all the knowledge it uses is what has been trained into it ahead of time. When we want to bring in new information during an actual conversation or application, we have to sneak it in through a context window using external tools.
Still, I believe this wave of AI is quite literally reshaping our world - at speed. And I would argue that the field where this transformation is currently most visible is coding.
When we talk about SDVs, ultimately, it all comes down to coding. And one thing I’ve always felt is that coding, for the most part, has stayed the same. You write out lines of code, building the system piece by piece, and then wrap things up with a bit of final polishing.
The problem arises after that. Let’s say you want to add a new feature, switch the database, or move the system to the cloud. It’s never a seamless experience. It’s always tough, and it takes a lot of time.
But now, thanks to AI, a new concept has emerged - vibing. And this feels completely different from traditional coding. Vibing is like a potter shaping clay with their hands. You have this clay - your code - and you mold it into shape with just a few prompts.
You can say, “Change this,” or “Add this feature,” and the system evolves. Even fundamental architectural changes can be made this way. It feels like magic.
I tried this again just last week. Over the past two years, I’ve been experimenting with various AI coding tools on and off, every few months. To be honest, the experience two years ago was pretty disappointing - the technology just wasn’t mature.
But last week, I had a truly astonishing experience. It was a real Eureka moment!
I was playing around with one of these AI coding tools, almost casually, when I suddenly remembered an example I’ve used for over 30 years. It’s a very simple simulation I used both while learning to code and while teaching others.
This time, I tried working on that same familiar example with the help of AI - and I was blown away by how differently the problem was solved. It completely shook my long-held assumptions that “you have to write every line of code yourself” and “you must design the whole structure manually.”
Let me describe the simulation. It involves fish and sharks. The sharks eat the fish. But if there aren’t enough fish, the sharks starve and die, allowing the fish population to grow again.
It sounds simple, but it’s actually a fairly realistic ecological simulation. I implemented it using just a few prompts - right in the browser - using a modern AI tool like Replit.
With that, I built a living, breathing simulation system. You could visually observe how the populations of fish and sharks changed in a simulated ocean.
The amazing part? The system even auto-generated a few control sliders for me - like one to set how long a shark could survive without eating, or to adjust other survival conditions.
I didn’t write any of this code myself. In just 20 minutes - without a single line of code - I had created a simulation with this level of complexity. That’s the power of Generative AI.
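To give a feel for what such a simulation involves, here is a minimal Python sketch of this kind of predator-prey loop. It is not the code the tool generated: the parameters (SHARK_STARVE_AFTER, OCEAN_CAPACITY, the breeding rates) and the text-only output are illustrative stand-ins for the sliders and the visual ocean described above.

```python
import random

# Illustrative parameters; the sliders the AI tool generated were analogous
# to these (e.g. how long a shark can survive without eating).
SHARK_STARVE_AFTER = 4    # steps a shark can go without eating before it dies
FISH_BREED_RATE = 0.25    # chance a fish reproduces in a given step
SHARK_BREED_RATE = 0.10   # chance a shark that just ate also reproduces
CATCH_CHANCE = 0.005      # per-fish chance that a hunting shark catches it
OCEAN_CAPACITY = 1000     # crude carrying capacity for the fish population

def step(fish: int, sharks: list[int]) -> tuple[int, list[int]]:
    """Advance the ocean one step. A shark is represented by its hunger counter."""
    surviving_sharks = []
    for hunger in sharks:
        # Probability of catching at least one of the remaining fish.
        if random.random() < 1 - (1 - CATCH_CHANCE) ** fish:
            fish -= 1
            hunger = 0
            if random.random() < SHARK_BREED_RATE:
                surviving_sharks.append(0)        # newborn shark
        else:
            hunger += 1
        if hunger < SHARK_STARVE_AFTER:
            surviving_sharks.append(hunger)       # this shark lives on
    # Fish reproduce, limited by a simple carrying capacity.
    births = sum(random.random() < FISH_BREED_RATE for _ in range(fish))
    return min(fish + births, OCEAN_CAPACITY), surviving_sharks

fish, sharks = 200, [0] * 20                      # initial ocean
for t in range(60):
    fish, sharks = step(fish, sharks)
    print(f"t={t:2d}  fish={fish:4d}  sharks={len(sharks):4d}")
```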
And next, I want to talk about a topic that’s gaining tremendous attention these days: Agentic AI.
If you haven’t yet heard the term Agentic AI, I’d have to ask - have you been living under a rock?





 
The Power of Agentic AI:
Fish, Sharks, and the Magic of UI 



So then, what exactly is Agentic AI?
To me, the answer is quite simple. It’s the kind of AI that, if you handed it a credit card and a laptop, could go off and accomplish something on its own. In other words, it’s AI capable of autonomous action.
Brilliant thinkers like Andrew Ng have already outlined several patterns of Agentic AI. For example, there’s the planning agent, which breaks problems down into smaller parts and creates a plan to tackle them more effectively. Then there’s the retrieval-augmented agent (RAG), which finds the information necessary to solve a problem. There’s also the tool-using agent, which leverages specific tools to carry out tasks and implement solutions. And the key idea here is that all these agents can work together - a concept referred to as orchestration - to complete a job collectively.
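To make these patterns concrete, here is a schematic Python sketch of a planning agent, a retrieval (RAG) agent, and a tool-using agent chained together by a simple orchestrator. Everything in it is illustrative: in a real system each function would be backed by an LLM and actual tools rather than the hard-coded logic and toy knowledge base used here.

```python
def planning_agent(goal: str) -> list[str]:
    """Planning pattern: break a goal into smaller steps (an LLM would do this dynamically)."""
    return [f"analyse: {goal}", f"gather data for: {goal}", f"implement: {goal}"]

def retrieval_agent(step: str, knowledge_base: dict[str, str]) -> str:
    """RAG pattern: fetch the information relevant to this step from a knowledge base."""
    return "; ".join(v for k, v in knowledge_base.items() if k in step)

def tool_agent(step: str, context: str) -> str:
    """Tool-use pattern: carry out the step using whatever was retrieved."""
    return f"done [{step}] using <{context or 'no extra context'}>"

def orchestrate(goal: str, knowledge_base: dict[str, str]) -> list[str]:
    """Orchestration: the agents work together to complete the job collectively."""
    results = []
    for step in planning_agent(goal):
        context = retrieval_agent(step, knowledge_base)
        results.append(tool_agent(step, context))
    return results

if __name__ == "__main__":
    kb = {"gather": "requirements doc v3", "implement": "coding guideline 12"}
    for line in orchestrate("fix tooltip bug", kb):
        print(line)
```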
Now, what if we applied the concept of Agentic AI to the simulation example I shared earlier?
I was in the middle of building the shark-and-fish simulation using Replit AI when I noticed a small issue in the graph - the one displaying the population of sharks and fish. In the tooltip that appears when you hover over the graph, the word shark appeared twice. It was clearly a minor UI glitch.
So I simply typed in a prompt:
“Can you fix this small UI bug?”
Now, as many of us know, getting AI to successfully complete the last 10% of a task - especially when it comes to refining details - can be quite difficult. This is something that continues to challenge AI systems.
But in this case, astonishingly, it worked incredibly well. The AI recognized the issue and fixed it.
What really amazed me, though, was what came next: the AI took a screenshot of the updated application, analyzed that screenshot, and verified for itself whether the issue had truly been resolved.
Through this sequence, I was able to directly observe how the AI agent created a plan and how its specialized sub-agents executed that plan. Witnessing this entire flow was, honestly, a remarkable experience.
And now, within this flow, a third major theme emerges - Context Engineering.
As I mentioned earlier, those of us working with AI have repeatedly encountered challenges, including the issue of memory retention. That’s why, at this point in time, it’s important to recognize that context engineering is rapidly gaining attention, especially in the context of both Generative AI and Agentic AI.
This concept of context engineering goes far beyond what we’ve learned over the past two years about prompt engineering. It’s a more comprehensive approach - one that includes not only task-relevant memory, but also past interaction history and any additional contextual information that may be relevant to the problem at hand.
That’s why I would say this: Context Engineering is the third core concept we should be focusing on.
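As a minimal illustration of the difference from plain prompt engineering, the sketch below assembles an explicit context object - task, relevant memory, interaction history, and additional situational data - before anything is sent to a model. The class and field names are assumptions made for this sketch, not any particular framework’s API.

```python
from dataclasses import dataclass, field

@dataclass
class EngineeringContext:
    """Everything the model should 'remember' for this task, assembled explicitly
    because the model itself has no long-term memory (illustrative structure)."""
    task: str
    relevant_memory: list[str] = field(default_factory=list)      # prior decisions, specs
    interaction_history: list[str] = field(default_factory=list)  # past Q&A turns
    extra_context: list[str] = field(default_factory=list)        # regulations, tickets, ...

    def to_prompt(self, max_chars: int = 4000) -> str:
        """Pack the pieces into the context window, dropping oldest history first."""
        history = list(self.interaction_history)
        while history and len("\n".join(history)) > max_chars // 2:
            history.pop(0)                      # trim oldest turns when space is tight
        sections = [
            "## Task\n" + self.task,
            "## Relevant memory\n" + "\n".join(self.relevant_memory),
            "## History\n" + "\n".join(history),
            "## Additional context\n" + "\n".join(self.extra_context),
        ]
        return "\n\n".join(sections)[:max_chars]

ctx = EngineeringContext(
    task="Add a welcome-light animation to the door module",
    relevant_memory=["Door ECU uses CAN FD", "Animation spec rev B approved"],
    interaction_history=["Q: which ECU owns exterior lights? A: the body controller"],
    extra_context=["Exterior lighting is covered by regional lighting regulations"],
)
print(ctx.to_prompt())
```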
And now, I’d like to explore how all of this can be applied to Software-Defined Vehicles (SDVs).






 
The Evolution of SDVs
Toward AI-Defined Vehicles



So then, what really happens when SDVs and AI come together?
One thing is certain: when we talk about SDVs, most people will likely respond by saying, “That fish-and-shark simulation you just mentioned? It’s cute, sure, but it doesn’t even come close to the complexity of real-world automotive development.” And they’d be absolutely right. The reality of automotive development is far more complex.
Take, for example, the development of a new vehicle platform. The number of requirements alone can exceed hundreds of thousands - even more than a million in some cases. Tracking all of these requirements accurately across the entire V-model lifecycle is, frankly, nearly impossible. And let’s not forget: we’re still carrying the weight of legacy architectures. Just this morning, there was a fascinating discussion on where AUTOSAR should go from here. This remains a critical issue.

But the most severe challenge lies elsewhere: the number of configuration combinations grows exponentially with each new variant. This level of complexity is something we simply must address. To make matters worse, we’re still using legacy DevOps toolchains, data remains siloed across departments, and consistency is lacking. Even the homologation process - vehicle certification - remains stuck in outdated methods.

So, what’s the answer to all of these problems?






 
The ELC Capsule:
A Container for the Memory and Execution of Requirements



Some engineers might say, “No problem. Let’s just use a knowledge graph.” But here’s the question we should really be asking: are we simply building another enterprise monster? In other words, are we kicking off yet another multi-year, top-down, monolithic “one-model-solves-all” mega project? Is that really the solution? Maybe parts of it can help - but the core question still stands: how do we deal with all this extreme complexity? Or, to put it another way: how do we eat the SDV elephant? And the answer, as many of you probably already know, is: “One bite at a time.”
This takes us back to where we were around this time last year. What were we talking about back then? We discussed hardware abstraction layers in SDVs and proposed shifting as much code as possible upward - toward the higher levels of the software stack (also known as “shift north”). Why? Because the upper layers move faster and only require quality assurance, while the lower layers must be conservative due to real-time constraints and safety concerns. This is the key distinction between “vibing” - the flexible shaping of code like clay - and the rigorous, detail-oriented world of real-time, safety-critical software.

And here’s what’s important: neither of these two extremes is going away. Yes, AI can help, but it’s not going to magically transform the entire automotive software stack into a “vibing” environment overnight. This is one of the two main axes of SDV development. The other half is about how we manage automotive complexity. To manage this complexity properly, we need to strike a balance between being agile, incremental, and continuously innovative, while also upholding the principle of “first time right.”

And here’s something we must acknowledge: within the automotive industry, value streams operate at vastly different speeds. AI and software are evolving at breakneck speed, whereas physical systems like mechatronics move at a much slower pace. Recognizing this disparity is crucial. That’s why we must adopt an architectural perspective - we need architectural layering, loose coupling, and multi-speed delivery models.
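One illustrative way to picture such a multi-speed delivery model is a simple mapping from software layers to release cadences and assurance regimes, as in the sketch below; the layer names and cadences are assumptions chosen for illustration, not a normative digital.auto model.

```python
# Illustrative mapping of software layers to value-stream speeds. The layer
# names, cadences, and assurance regimes are assumptions for this sketch.
VALUE_STREAMS = [
    # (layer,                                cadence,   assurance regime)
    ("cloud / companion apps",               "daily",   "automated QA only"),
    ("in-vehicle apps (shift-north code)",   "weekly",  "automated QA + OTA gating"),
    ("vehicle HAL / middleware",             "monthly", "integration tests + partial safety case"),
    ("real-time / safety ECUs",              "per SOP", "full safety case, first time right"),
]

def delivery_plan(feature: str, layer: str) -> str:
    """Return which stream a feature ships on and under what assurance regime."""
    for name, cadence, regime in VALUE_STREAMS:
        if name == layer:
            return f"{feature}: '{name}' stream, cadence={cadence}, {regime}"
    raise ValueError(f"unknown layer: {layer}")

print(delivery_plan("welcome light animation", "in-vehicle apps (shift-north code)"))
print(delivery_plan("door actuator control", "real-time / safety ECUs"))
```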
Now let’s return to the topic of AI and where these pieces connect. Remember our earlier discussion on Context Engineering and memory in AI? When we zoom out and consider the full picture, we arrive at a fundamental need: the ability to glue all these moving parts together. But the solution shouldn’t be a years-long effort to build some massive enterprise knowledge graph. Instead, we should focus on immediate, problem-centered, directly applicable engineering approaches, particularly within the systems engineering and product line engineering domains. What we need are things like Context Capsules, or what we’ve termed Engineering Lifecycle Capsules (ELCs): compact packages that include the most relevant knowledge and situational context for solving the current problem.
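Purely as an illustration, such a capsule could be represented as a small data structure like the one below. The field names are invented for this sketch and are not a digital.auto specification.

```python
from dataclasses import dataclass, field

@dataclass
class ELCapsule:
    """Engineering Lifecycle Capsule: a compact, problem-centered package of
    knowledge and situational context for one engineering task (illustrative)."""
    task: str                                                     # e.g. "add door-open API to the HAL"
    requirements: list[str] = field(default_factory=list)         # regulatory, functional, non-functional
    vehicle_apis: list[str] = field(default_factory=list)         # signals / interfaces touched
    hardware_dependencies: list[str] = field(default_factory=list)  # e.g. door motors, actuators
    interaction_memory: list[str] = field(default_factory=list)   # prior agent/human turns on this task
    value_stream: str = "fast"                                     # "fast" (QA-gated) or "slow" (safety)

    def add_requirement(self, req: str) -> None:
        """Attach a requirement only once, keeping the capsule compact."""
        if req not in self.requirements:
            self.requirements.append(req)

capsule = ELCapsule(task="Implement passenger welcome sequence")
capsule.add_requirement("Regional regulation on automatic door operation (placeholder)")
capsule.vehicle_apis.append("Seat position signal (illustrative name)")
print(capsule)
```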
Now let’s bring that mindset back to the tools of Agentic AI. Suppose we’re tackling a very specific task: maybe we’re implementing a new feature, or adding a specific API to the vehicle’s hardware abstraction layer (HAL). This kind of task can start by engaging a Planning Agent, which breaks the problem into smaller, manageable chunks. But as many of you in the automotive space know, implementing a vehicle API is rarely straightforward. First, we need to determine whether there is a physical foundation in place to support this functionality. Before any software gets written, we must assess the hardware feasibility. And this, precisely, is the kind of detail that needs to be embedded in the context capsule for that problem.

Next, we can use RAG (Retrieval-Augmented Generation) agents to access various enterprise data sources - requirements management systems, version control systems, DevOps databases, and other internal tools - to collect relevant supporting data. Then, using tool-using agents, we can perform the actual implementation work based on the retrieved data.







Passenger Welcome Sequence:
Two Diverging Value Streams



Let me now walk you through a more concrete example - one we’ve been using repeatedly over the years: the Passenger Welcome Sequence. There are two reasons we keep coming back to this example. First, many OEMs are actually developing this very functionality. Second, it’s easy to understand.

The feature refers to the vehicle detecting when the driver is approaching and automatically initiating a welcome sequence. For example, the door opens automatically, the driver’s preferences are fetched from the cloud, and then the language settings, seat position, HVAC system, and other configurations are adjusted accordingly. This scenario shows that we ultimately need to break down the entire sequence into detailed steps - and implement them one by one. So how can we do this in practice?

This is something that must be handled on a fast-moving value stream. If this sequence fails once in every ten thousand executions, it’s certainly not ideal - but it doesn’t cause critical harm either. However, if one of the elements in the sequence is opening the door, then that’s a different story: that’s a much more sensitive and mission-critical function. Still, from a high-level orchestration perspective, the overall passenger welcome sequence is something that can be improved incrementally, and it’s a great candidate for applying agile principles in development.
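A sketch of what this orchestration might look like is shown below, with each step tagged by the value stream it belongs to. The signal names are written in the style of the COVESA VSS catalog but are illustrative rather than exact catalog paths, and the safety-relevant door step is only delegated, not implemented.

```python
import time

# Each step of the welcome sequence, tagged with the value stream it lives on.
# Signal names are VSS-style illustrations, not exact COVESA catalog paths.
WELCOME_SEQUENCE = [
    ("detect driver approaching",    "fast", None),
    ("fetch preferences from cloud", "fast", None),
    ("open driver door",             "slow", "Vehicle.Body.Door (safety-relevant)"),
    ("set seat position",            "fast", "Vehicle.Cabin.Seat position"),
    ("set language / HMI profile",   "fast", "Vehicle.Cabin.Infotainment locale"),
    ("set HVAC to preferred temp",   "fast", "Vehicle.Cabin.HVAC target temperature"),
]

def run_welcome_sequence() -> None:
    for step, stream, signal in WELCOME_SEQUENCE:
        if stream == "slow":
            # Safety-relevant step: only a request is issued here; the actual
            # actuation belongs to the qualified, slow-moving safety stack.
            print(f"[slow/safety] request: {step} via {signal}")
        else:
            print(f"[fast/agile ] execute: {step}" + (f" via {signal}" if signal else ""))
        time.sleep(0.1)   # stand-in for waiting on the actual actuation/response

run_welcome_sequence()
```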
Now, what do we need in order to build this ELC capsule? The first step is to understand the requirements. As you know, we’re dealing with millions of requirements, so how do we identify which ones are truly relevant to us? This is something we experimented with in the digital.auto project: we tried creating something like a Requirements Radar. The idea behind this radar is to place the requirements most relevant to the feature being worked on at the center, with less critical ones radiating outward. Within this framework, AI scans across various systems - DOORS, Rhapsody, Polarion, wherever the requirements may be hidden - and extracts those that are relevant to the functionality in question. These requirements could be regulatory, functional, or non-functional. And that’s where the building of the ELC capsule begins.
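The radar idea can be illustrated with a toy sketch in which a naive keyword-overlap score stands in for the AI’s semantic matching. The requirement texts are placeholders, and DOORS, Polarion, and Rhapsody appear only as source labels, not as real integrations.

```python
# Naive keyword-overlap scoring stands in for the AI's semantic matching.
FEATURE = "passenger welcome sequence door seat hvac language"

# Requirements as (source system, text) -- sources named only as labels.
REQUIREMENTS = [
    ("DOORS",    "Seat position shall restore the stored driver profile."),
    ("Polarion", "HVAC shall reach target temperature within 5 minutes."),
    ("DOORS",    "Door actuation shall stop on detected obstruction."),
    ("Rhapsody", "Brake pedal travel shall not exceed spec under load."),
]

def relevance(req_text: str, feature: str) -> float:
    """Fraction of the requirement's words that also appear in the feature description."""
    feat_words = set(feature.lower().split())
    req_words = set(req_text.lower().replace(".", "").split())
    return len(feat_words & req_words) / len(req_words)

def radar(requirements, feature):
    """Sort requirements by relevance and assign them to radar rings."""
    scored = sorted(requirements, key=lambda r: -relevance(r[1], feature))
    for source, text in scored:
        score = relevance(text, feature)
        ring = "center" if score > 0.1 else "outer"
        print(f"[{ring:6s}] ({source}) {text}  score={score:.2f}")

radar(REQUIREMENTS, FEATURE)
```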








The next step is actual implementation. At this stage, we’ve sufficiently understood the requirements behind the passenger welcome sequence - be it customer journeys, global regulations, or our own design intentions - and we begin to implement. This implementation phase, too, must be AI-assisted. Of course, that includes AI-generated code. But bridging the gap between requirements and actual code still requires additional steps, and in the SDV context, Vehicle APIs play a crucial role in closing this gap.

digital.auto is currently running an open-source project under the Eclipse Foundation, and this project has recently been extended to natively embed Agentic AI capabilities. One tool from this project, the Vehicle API Explorer, is based on the API catalog from COVESA and currently defines over 1,200 vehicle signals at the lowest level.

The next task is to determine which of the requirements we’ve gathered for the passenger welcome sequence are actually connected to available interfaces. So when you ask the AI, “Show me the interfaces I need right now,” it will, of course, suggest interfaces for things like moving the seat or opening the door. What’s interesting, however, is that the AI also provides additional planning information. For example, it may indicate on screen that the “move the seat” API exists and has a “committed” status, meaning the feature is confirmed to be implemented by the time the vehicle goes into production. In contrast, the “open the door” API may be present in the vehicle API catalog but marked as “uncommitted,” meaning there’s no definitive guarantee yet that this function will be supported at production launch.
At this point, the AI’s role is to analyze the issue further and help us answer the question: “What exactly is needed for this API to become committed?” Take the “open door” API, for instance. To commit this, we first need to know whether there’s a plan to physically install door motors in the vehicle - something closely tied to the mechatronics design. If the decision has been made not to install door motors, then implementing the door-opening API at the ECU level becomes meaningless. The more hardware-dependent the function is, the greater the effort needed to determine whether the API can realistically be implemented.

And here, AI plays yet another role: by understanding the relationships between people, the dependencies between tasks, and the workflow within the system, it can generate dependency maps based on these factors. So even while we’re happily implementing the passenger welcome sequence, the AI might intervene and say: “Hold on - there’s still a critical dependency unresolved for the ‘open door’ feature.” It reminds us that unless we resolve this issue, the overall system implementation could be compromised.
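How an agent might raise such a warning can be sketched with a toy API catalog in which each entry carries a commitment status and the dependencies blocking it. The entries, statuses, and blocker texts below are invented for illustration and do not reflect the actual COVESA catalog or any OEM’s planning data.

```python
# Illustrative API catalog entries: status plus the dependencies blocking commitment.
API_CATALOG = {
    "Seat.Position.Set": {"status": "committed", "blocked_by": []},
    "Door.Front.Open":   {"status": "uncommitted",
                          "blocked_by": ["mechatronics: decision on door motors",
                                         "ECU: door actuator driver"]},
}

def check_feature(apis: list[str]) -> list[str]:
    """Return the warnings an agent would raise before we keep implementing."""
    warnings = []
    for api in apis:
        entry = API_CATALOG.get(api)
        if entry is None:
            warnings.append(f"{api}: not present in the vehicle API catalog at all")
        elif entry["status"] != "committed":
            blockers = "; ".join(entry["blocked_by"]) or "unknown"
            warnings.append(f"{api}: uncommitted -- unresolved dependency: {blockers}")
    return warnings

for w in check_feature(["Seat.Position.Set", "Door.Front.Open"]):
    print("WARNING:", w)
```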
This sends us back to the requirements analysis phase. We now need to think about how we can implement this feature safely. “Opening the door” isn’t just a trivial function: yes, it’s part of the passenger welcome sequence, but it has strong hardware dependencies and is likely to be implemented on a slower-moving track. And now our next task becomes clear: how do we map the requirements to implementation approaches, and how do we design the actual implementation? At this point, we bring in more code-centric tools, such as ETAS’s Embedded AI Coder - the kind of tool that can help accelerate the development of safety-relevant features.

Assuming we have such tools, the next question naturally becomes: “What does this mean from a testing perspective?” At this stage we have two ELC capsules, each containing all the data required for its respective task. The next step is to enrich these ELC capsules. That means incorporating:
Test-specific data
Test cases
Test implementations
Test execution results
Ultimately, all of this must be connected to the global master test plan, because we need to maintain a consistent and coherent strategy and planning for testing across the entire system.
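Continuing the earlier capsule sketch, enrichment could look like the following: test cases, their implementations, and their results are attached to the capsule, together with a link back to a master test plan. The field names and the plan identifier are again illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    implementation: str        # e.g. a reference to a pytest module or HIL script
    result: str = "not run"    # "passed" / "failed" / "not run"

@dataclass
class EnrichedCapsule:
    """ELC capsule enriched with test-specific data (illustrative fields)."""
    task: str
    test_cases: list[TestCase] = field(default_factory=list)
    master_test_plan_id: str = ""   # link back to the global master test plan

    def record_result(self, name: str, result: str) -> None:
        """Update the outcome of a test case already attached to the capsule."""
        for tc in self.test_cases:
            if tc.name == name:
                tc.result = result
                return
        raise KeyError(name)

capsule = EnrichedCapsule(task="Passenger welcome sequence",
                          master_test_plan_id="MTP-placeholder")
capsule.test_cases.append(TestCase("seat restores stored position", "tests/test_seat.py"))
capsule.record_result("seat restores stored position", "passed")
print(capsule)
```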



Pre-Homologation



Finally, as we bring all of this together, there’s one more topic I’d like to address: homologation. In fact, this is something that needs to begin much earlier than most of us might think - not at the far right end of the V-model, but at its very starting point on the left. In other words, it must be considered from the earliest stages of development.

And so we once again return to the ELC capsule. This time, we need to include homologation-related data within it - for instance, information about the relevant global regulatory frameworks. To make this possible, digital.auto collaborates with partners such as Certivity. These companies use AI to scan and analyze the massive body of regulatory information scattered across thousands of PDF documents worldwide. During this process, AI determines which regulations apply to which systems or functionalities, and it organizes that information accordingly.

This enables us to initiate a process known as homologation pre-assessment, and what this assessment tells us is quite important. For example, the overall passenger welcome orchestration might fall under a relatively light homologation procedure, because it does not constitute a safety-critical function. However, several individual modules used within that sequence - such as the “open door” feature - are safety-critical and must therefore undergo a much more complex and stringent homologation process.

Taking these differences into account, we can then create a plan: an AI-based planning agent proposes a homologation strategy, which is then reviewed and refined in collaboration with human experts. Through this process, we prepare the necessary data. Based on that information, we can execute and validate systems along value streams that move at different speeds - value streams that are part of a multi-speed delivery model.
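A toy sketch of such a pre-assessment step is shown below: each module of the feature is classified as needing either a light or a stringent homologation track depending on whether it is safety-critical, and a draft plan is produced for review with human experts. The classification rule and the regulatory labels are placeholders, not Certivity’s method or real regulatory text.

```python
# Placeholder pre-assessment: classify each module by safety criticality and
# attach the (illustrative) regulatory frameworks an AI scan might surface.
MODULES = [
    {"name": "welcome orchestration (cloud/app logic)", "safety_critical": False,
     "regulations": ["data-protection rules for driver profiles (placeholder)"]},
    {"name": "open door",                               "safety_critical": True,
     "regulations": ["regional rules on powered door operation (placeholder)"]},
    {"name": "seat positioning",                        "safety_critical": False,
     "regulations": []},
]

def pre_assess(modules):
    """Assign each module to a light or stringent homologation track (toy rule)."""
    plan = []
    for m in modules:
        track = "stringent homologation" if m["safety_critical"] else "light procedure"
        plan.append((m["name"], track, m["regulations"]))
    return plan

print("Draft homologation plan (for review with human experts):")
for name, track, regs in pre_assess(MODULES):
    extra = f"; relevant frameworks: {', '.join(regs)}" if regs else ""
    print(f" - {name}: {track}{extra}")
```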


 
From SDV to AI-Defined Vehicle



If we pull all of this together, what I wanted to show you today is a snapshot of where we are headed. We are currently in a transitional phase, moving from Software-Defined Vehicles (SDVs) toward what we call the AI-Defined Vehicle.

The core message of today’s presentation is this: “Don’t try to solve the whole problem all at once. Instead, break it down - bite by bite - within the context of multi-speed development, and solve each step in alignment with the current capabilities of AI infrastructure.”

To do this, we use the concept of the ELC capsule. This capsule contains all relevant information - requirements, context, memory - and helps us break the work down into digestible bites. In this way, step by step, but with persistence, we can move forward - together with AI.






<Copyright © AEM. Unauthorized reproduction, redistribution, and use for AI training prohibited>

