tracetronic: Reconnecting the Fragmented Validation Loop Through Automotive DevOps
2026-04-22 / May issue, print article
/ By Sang Min Han _han@autoelectronics.co.kr
Kohser added, “Ultimately, the companies that succeed are those that can operate the full loop from requirements all the way to customer feedback.”
INTERVIEW
Christian Kohser
VP of tracetronic
As companies invest heavily in the transition to SDVs, many engineers on the ground are still struggling with delayed test results, unclear bug reports, repeated re-testing, and heavy manual coordination. According to Christian Kohser, Vice President at tracetronic, the root cause is not the performance of individual tools, but the fragmented validation flow between tools and teams. This interview reframes validation not as a downstream checkpoint but as an operational backbone that sustains the entire SDV development process. Competitiveness in the SDV era is not driven solely by individual engineering effort.
It ultimately depends on how leadership redesigns and manages the validation structure.
By Sang Min Han _han@autoelectronics.co.kr
Viewing Validation Not as Execution, but as Industrialization
What is the key difference that sets tracetronic apart from traditional test tool vendors? And why do you describe one:cx as an Automotive DevOps platform?
Kohser The key difference is quite clear. While traditional vendors focus on supporting test execution, tracetronic focuses on industrializing validation itself. Most test tools or in-house solutions are designed to optimize specific tasks, tools, or lab activities. However, the challenge OEMs face in the SDV era is much more structural.
In my view, time is not primarily lost during test execution. It is lost across fragmented workflows and handover points. Time is wasted when results are handed over between teams, when data is collected and interpreted, and when feedback is sent back to developers or prepared for decision-making. one:cx was built to connect exactly this fragmented flow.
Our company was founded in 2004 through a project with BMW, during a transition in ECU network architecture for the 7 Series. What we observed repeatedly was that the biggest losses in time and quality did not occur in testing itself, but where the overall loop from requirements through development to execution, results, and follow-up was broken. SiL, HiL, and vehicle testing are simply different levels within that loop. What matters is connecting them into one continuous validation flow.
This is also why we describe one:cx as an Automotive DevOps platform. If software development speed increases without validation keeping pace, it only leads to downstream instability. The key is not execution but orchestration. In other words, it's not about running more tests, but about making validation work across the organization.
Why are DevOps and orchestration no longer optional in the SDV era?
Kohser It is not that development environments have simply become unmanageable. The more important shift is that the entire loop where customer value is created must now remain intact from end to end.
In the past, requirements, development, integration, validation, release, and customer feedback could be treated as separate stages. Today, all of these are tightly connected. If that loop is not continuously managed, the organization becomes slow exactly where the market is moving fastest.
The bottleneck is rarely inside individual teams. It typically occurs between teams, domains, integration stages, and test environments. That is where silos and manual coordination appear, creating what we call integration debt.
DevOps is not just about CI/CD or faster deployment. It is about end-to-end ownership. What starts as a requirement must flow continuously through development, integration, validation, release, and back to customer value. If that loop breaks, software development speed turns into market delay. If it holds, software becomes a competitive advantage.
one:cx-based DevOps loop architecture. Development and validation are connected as a continuous flow.
The one:cx architecture integrates fragmented development and validation environments.
Cloud-Native Alone Is Not Enough to Explain Automotive
You mentioned that a cloud-native approach alone is not sufficient for the automotive industry. What is the biggest gap in the Silicon Valley-style software perspective?
Kohser Cloud-native approaches bring speed, flexibility, and scalability. That is valuable. But automotive is not a purely software environment. It is a cyber-physical system involving sensors, actuators, ECUs, vehicle dynamics, real-world environments, safety requirements, and regulations. Silicon Valley companies excel at software development. But automotive requires managing cross-domain complexity, physical behavior, and supply chain variation. Even Silicon Valley companies relied on our solutions when developing automotive systems. Ultimately, everyone must deal with the full software and hardware framework.
The real differentiation in automotive is not about moving code faster. It is about performing deterministic, cross-domain validation in real-world conditions, while building a level of trust that is strong enough for release readiness. That is why cloud-native thinking alone is not sufficient.
Many OEMs are investing in automation and CI/CD yet still experience significant delays. What is the root cause, and what are the earliest warning signs in practice?
Kohser The root cause lies in a disconnected feedback loop. Many CI/CD pipelines execute tests effectively, but break down when it comes to interpreting results and returning meaningful feedback to developers.
There’s a phrase I often use in practice: if feedback comes eight weeks later, it is already too late. Developers no longer remember what they changed eight weeks ago. They don’t even remember why the code was written that way or what context it was based on. For feedback to be meaningful, results need to come back within minutes or hours after a commit.
The early warning signs are clear. The issue does not usually show up first in KPIs. In practice, it appears earlier, when it becomes increasingly difficult to tell what is going wrong. Automated results continue to come in, but for developers, issues start to pile up without structured feedback. In more practical terms, engineers go through longer triage loops, repeat the same tests, and face increasing manual review work. At a certain point, it becomes difficult to even distinguish whether an issue is an existing known defect or a newly introduced one. I describe this as a “loss of clarity.” When automation increases but decision-making speed does not keep up, it means structural problems have already started.
An integrated validation workflow spanning MiL - SiL - HiL - ViL
What SiL and Continuous Testing Are Changing
What changes first with SiL-based validation and continuous testing?
Kohser The first change is feedback speed to developers. Then validation lead time decreases and only after that does decision quality improve. The sequence matters.
In one project with AMG and Mercedes-Benz, the entire cycle from business logic to build, test execution, and result feedback was completed in 730 seconds, which is less than 13 minutes. This is not just about faster testing. It means developers can review the results and act almost immediately.
I describe this as “a field-centric management approach” in software. Developers can directly see what is happening, where gaps exist, and what needs to be improved. This also changes collaboration. Validation is no longer a downstream phase. Developers take responsibility for quality earlier, while testing teams move toward higher-value roles such as automation enablement, test strategy, coverage design, and release evidence management.
In implementing Continuous Validation, what is the biggest barrier, technology or organization? And how can trust be ensured in large-scale automation systems?
Kohser I do not see these as separate issues. Each organization operates with different levels of technical capability, different workflows, and different levels of domain maturity. In practice, organizations need to progress step by step from manual testing to automation, to standardized test assets, to process automation, and ultimately to managing release evidence.
The issue of trust follows the same pattern. In practice, the greater risk is not automation itself, but the way organizations attempt to manage SDV-scale complexity using disconnected tools, manual handovers, and fragmented validation evidence.
Manual processes may appear safer because they are visible. However, they often conceal inconsistencies, undocumented transitions, and low reproducibility. In contrast, automation within a proper management framework makes execution more transparent and more comparable, and allows release progression to be evaluated against clearly defined criteria. That is what we mean by governed automation.
Ultimately, the key is the speed of the loop. From this perspective, how do you assess the recent pace of Chinese OEMs, which have been moving most aggressively in the global market? Have they already established a new operating model?
Kohser I would frame this less as a comparison between countries and more as a difference in operating speed and feedback-loop design. The biggest difference is not nationality, but loop speed.
Many of the newer Chinese OEMs are far less constrained by legacy systems. Their way of collaborating with suppliers is also fundamentally different. In Germany, it is not uncommon to go through thousands of pages of contracts before even starting a project. In China, by contrast, projects can sometimes begin with a single sheet of paper. There is first an agreement along the lines of “let’s bring a good product to market together,” and then things move forward from there. In Germany, everything tends to be specified in detail, but by the time all of that is written down, the development situation may already have changed. The ability to push updates in two-week sprints, validate them immediately, and iterate again is a major advantage.
If we look at what European or Korean OEMs need to change, it is the handover structure between development, integration, and validation. The priority is not to run more tests, but to integrate more frequently, reduce silos, and create a structure where structured feedback is returned directly to the responsible feature teams.
Kohser noted, “A significant amount of time is not lost in test execution itself, but in the steps in between: handing over results, organizing them, and feeding them back.”
ADAS, Simulation, AI, and What Comes Next
As the industry moves toward ADAS and autonomous driving, how does the validation structure change? What roles should simulation and physical testing each play?
Kohser As ADAS and autonomous driving continue to evolve, validation can no longer remain at the level of pass/fail checks for individual functions. It must now evaluate how robustly and consistently the overall system behaves across a wide range of scenarios. This is where the real challenge begins. As the number of scenarios and edge cases that need to be validated grows explosively, physical testing alone can no longer keep up. That is why simulation is no longer optional; it has become essential.
Simulation provides speed, scale, and reproducibility. It allows the same conditions to be repeated and enables thousands or even tens of thousands of scenarios to be validated in parallel, making it possible to identify issues much earlier in the development process. This is especially true for software integration and regression testing, where simulation delivers overwhelming efficiency. However, this does not mean that physical testing is reduced. On the contrary, its role becomes clearer. Physical testing is the stage for final confidence. Whether the system behaves correctly in the real world, including sensor behavior, vehicle dynamics, and interaction with real environments, can ultimately only be confirmed through physical validation.
In the end, the key is not choosing between the two, but defining their roles. Simulation should take on broad learning and fast feedback, while physical testing should focus on final validation and confidence. If this structure is not properly established, development speed may increase, but delays in validation will continue to repeat.
As AI and agentic testing continue to expand, what ultimately determines success or failure in the SDV era? And what is the first thing engineers and decision-makers in Korea need to change?
Kohser AI has already begun to play a significant role in testing. It clearly delivers value in areas such as test generation, defect classification, anomaly detection, result summarization, and root-cause assistance. In practice, we are building systems that can generate test specifications from natural language requirements, construct test steps, and cluster results so that engineers can immediately see what to focus on first. These changes are felt very directly in the field. In one case, test pass rates improved from below 50% to 74% in only ten weeks, and one engineer even described it as “getting my life back,” no longer stuck in unclear data. It is not that there is simply more data; it is that the data has finally become actionable.
Ultimately, AI is less about replacing validation and more about organizing results and showing what needs to be looked at first. However, this does not mean that AI has reached a stage where it can take over all decision-making. Especially in areas such as release decisions and functional safety, human accountability remains essential. Combining standards, norms, and human expertise with AI is our core strategy.
In the end, competitiveness in the SDV era is determined elsewhere. It is not about the ability to write more code, but about how fast and seamlessly the entire loop from requirements to development, validation, OTA, and customer feedback can be operated. If this loop is fast, software becomes a competitive advantage. If it is slow, even significant investments will fail to translate into market outcomes.
The most important change for engineers and decision-makers in Korea lies here as well. They need to move away from the mindset of treating validation as a downstream phase after development. Validation must be treated as a continuous function that runs alongside development. The current structure is simply too expensive. A model that requires several million euros per platform is not sustainable in the long term. This is not a problem that can be solved by having a few engineers work harder. The validation loop and the operating model itself need to be redesigned.
AEM (Automotive Electronics Magazine)
<Copyright © AEM. Unauthorized reproduction and redistribution prohibited.>