Automotive UX After Peak Display
2026-03-10 / May print issue / Sang Min Han, han@autoelectronics.co.kr


This essay was written by Dr. Peter Rössger, a global HMI keynote speaker and UX expert, based on his firsthand observations at CES 2026. According to Dr. Rössger, CES 2026 marked a turning point for automotive UX. Over the past decade, the automotive industry has focused intensely on digital cockpits and ever-larger displays, but that trend is now approaching its peak, and the era of the “post-screen interface” is beginning to emerge. In the future, vehicle UX will expand into multimodal interfaces that combine voice, haptics, lighting, sound, and physical feedback, while AI will fundamentally reshape the relationship between humans and vehicle systems.


By Dr. Peter Rössger, beyond HMI/////
Peter.Roessger@beyond-hmi.de
Edited by Sang Min Han


Dr. Rössger runs beyond HMI/////, where he places people at the center of technology development and offers clients new visions for thinking, perception, decision-making, and execution. In addition to consulting, he is also active as an author and keynote speaker focusing on the relationship between humans and technology. Before founding beyond HMI/////, he spent four years in the electronics services industry, twelve years at Harman Automotive (now part of Samsung), and four years at Daimler. He holds a PhD in Human Factors Engineering from the Technical University of Berlin.






It was an intense four days: 63 kilometers, or about 39 miles, walked across the Las Vegas Convention Center. There were 21 scheduled meetings, as well as countless spontaneous conversations in hallways, booths, and shuttle buses. Add to that the CES shuttles and rides on the Las Vegas Monorail, and the entire event felt like one enormous current in motion. CES always compresses the future into a few days of exhaustion and inspiration. It reveals not only where technology is heading, but also how the relationship between technology and people is changing. This article is an attempt to hold on to the meaning of what I saw amid the noise, movement, and caffeine. It is neither a product catalog nor a trend list chasing hype. Instead, it reflects on the patterns, absences, contradictions, and signals that emerged in the relationship between technology and people, particularly in the areas of automotive, mobility, HMI, and AI.

 

Who Was There—and Who Wasn’t

The CES exhibition floors were still dominated by global technology and electronics companies. Major players such as LG, Panasonic, Sony, and Bosch once again had a strong presence this year, showcasing a wide range of technologies including displays, sensors, AI-based systems, smart environments, and platform technologies. Platform companies such as Amazon demonstrated that CES is increasingly being reshaped around software ecosystems, cloud infrastructure, and AI-centered business models rather than hardware alone. HERE represented mapping and location intelligence, Siemens stood for industrial technology and automation, and Garmin represented consumer-oriented mobility and navigation solutions.

Beyond the tech giants, another noticeable trend was the expansion of two-wheelers and micromobility. The growing presence of electric motorcycles, e-bikes, scooters, and shared-mobility providers showed that CES’s mobility narrative is no longer confined to automobiles but is extending in a more urban and flexible direction. Together with AI startups, sensor companies, and software platform companies, this further reinforced CES’s identity as a technology-first event that treats mobility less as a finished product and more as a field of digital, data-driven, and AI-enabled applications.

But more important than who was there was who was not. Although AI, connectivity, autonomous driving, and software-defined systems were among the core themes of CES, the presence of traditional automakers was noticeably weaker than in the past. Mercedes, Audi, Toyota, Fiat, and Stellantis were absent from the show floor. Among established vehicle brands, BMW was virtually the only one maintaining a formal presence, while from China only Geely and Great Wall stood out. Hyundai did participate, but it focused more on robotics and future automation than on a traditional automotive exhibition.

This absence raises a clear question: is CES still an attractive communication platform for automakers, or is the center of innovation storytelling shifting toward their own events or more regional formats? What is clear is that the relationship between CES and traditional automotive OEMs has entered a different phase.

 

Three Highlights

The first was SoundHound’s agentic AI voice interaction. The system demonstrated an advanced voice interface capable of maintaining context, handling complex multi-step intentions, and proactively supporting the user. In the automotive environment, this was especially impressive because of its potential to significantly reduce the driver’s cognitive load.

The second was NXP’s in-cabin sensing. By combining radar and smartwatches, among other inputs, NXP presented a sensor-fusion approach capable of understanding occupant condition and attention level, opening up new possibilities for safer and more comfortable interior experiences.

The third was Alps Alpine’s cross-domain HMI technology. Through an interaction portfolio that extended beyond the automotive sector, it showed how input and interface innovation from non-automotive fields can influence the future of automotive HMI design.

 


Realbotix

 

The Biggest Disappointment

The biggest disappointment was the low level of participation by automotive OEMs and major suppliers. This weakened CES’s role as a comprehensive platform for automotive innovation. It was especially unfortunate to see the absence of IAV, which had presented a highly future-oriented booth in 2025.

More fundamentally, there was also a lack of disruptive automotive trends. Individual technologies were impressive, but they rarely came together as part of a new narrative for the future of the car. Bold UX visions and paradigm-shifting ideas were also in short supply.

 

The Most Chilling Moment

The most chilling moment came in the North Hall, where humanoid robots were on display. Technically, they were astonishing—but the problem was the deliberate effort to make them look human. Hats, clothing, and silicone skin transformed technical admiration into discomfort. This is the uncanny valley in action.

Do we really need artificial humans? It may be more transparent—and far less disturbing—to design robots so that they clearly appear as machines, rather than as replicas of people.

 

What CES Did Not Say

CES 2026 was a rich event in terms of technical breadth and density of innovation, but it also revealed several important gaps. This was not a failure in the traditional sense, but rather an indication of where the industry is still hesitating.

AI, autonomy, and intelligent systems were highlighted throughout the exhibition, but ethical discussions largely remained at the level of safety and regulatory compliance. Questions of responsibility, long-term societal impact, transparency in decision-making, and human control were rarely addressed in the language of design or technical development. Ethics was often treated more like a checklist than a core element of technology development.

Another gap was the lack of reflection on how increasingly intelligent HMIs affect human cognition and psychology. Interaction technologies were impressive, but there was little discussion of their long-term effects on human attention, dependency, skill degradation, or the formation of trust. Critical discussion was also lacking on how automation and AI support systems might weaken human capabilities. Convenience and safety improvements were emphasized, but there was almost no questioning of what level of human capability should be retained or how systems might actively support human skill.

The exhibition excelled at showing what is possible, but devoted far less time to discussing what should be done, what should be avoided, and what responsibilities come with growing technological power.

 

Two Connected Major Trends: Robotics and Artificial Intelligence

The clearest trend this year was the convergence of robotics and artificial intelligence. What mattered was not simply that both technologies had become stronger individually, but that they were increasingly operating as one combined movement. It is precisely this fusion that is becoming a new source of value creation.

At CES, robots were beginning to move beyond executing predefined sequences. Thanks to the deep integration of vision-based perception, large language models, agentic AI architectures, and learning-based control, they were starting to understand context, interpret human intention, and adjust their behavior to changing environments.

From a human-centered perspective, this shift is highly significant. Technology only becomes meaningful when it genuinely improves human life—when it reduces effort and burden, increases safety, expands accessibility, and enables people to focus on what matters more. That value-oriented direction was clearly visible at the show. In logistics and industrial environments, intelligent robots demonstrated the potential to reduce strain and increase reliability. In service contexts, the direction was toward supporting everyday interaction rather than replacing people. In mobility as well, robotic systems were evolving toward safer, more predictable, and more inclusive mobility experiences.

The core logic is clear: AI gives robots contextual understanding, while robotics gives AI physical presence. Ultimately, the promise of both lies not in sophistication for its own sake, but in creating systems that are useful, trustworthy, and aligned with human needs. The most convincing examples were not those that showcased maximum autonomy or human likeness, but those that clearly demonstrated how intelligent machines can make work safer, mobility more reliable, and everyday life more manageable. In this sense, the convergence of robotics and AI marks a shift from technology as spectacle toward technology as a means of creating real human value.

 

Automotive Trends: Where Are We Heading?

CES 2026 once again showed that innovation in automotive and mobility is no longer driven by a single dominant technology. Rather than presenting a clear and unified automotive roadmap, the exhibition revealed the fragmented landscape of an industry in transition.

Software-defined vehicles, AI, automation and autonomous driving, two-wheelers, and micromobility each unfolded as separate narratives. At times these narratives reinforced one another; at other times they competed for attention. Combined with the selective participation of OEMs and suppliers, the strong presence of technology companies, and the growing influence of non-automotive sectors, the show demonstrated that the future of mobility is increasingly being shaped outside the traditional boundaries of the automotive industry.
 

Software-Defined Vehicles (SDV)

One of the most decisive trends was the rise of the software-defined vehicle. The smartphone industry offers a useful analogy. Today, the value and experience of a smartphone are defined less by the hardware itself than by software, apps, operating systems, and digital ecosystems. Cars are moving in the same direction. Vehicles are increasingly becoming updateable and expandable platforms, where functions and user experience are no longer fixed at the point of sale but continue to evolve through software. The value of a car now depends not only on its mechanical specifications, but on how it changes and expands over time.





Afeela 1


Tensor Car


 

Driving Automation and Vehicle Autonomy

The promise of autonomous driving remains alive, but the focus of the question has shifted from “When will it happen?” to “How will it work, and what value will it create?” The key issue is how the technology cooperates with people, how it builds trust, and what new experiences it enables beyond simply taking over the driving task. Autonomous driving is no longer just a driving technology—it is becoming a force that reshapes how in-vehicle space is used and how interface structures are designed.
 

The Growing Innovation of Two-Wheelers

The strong presence of electric motorcycles, e-bikes, and smart scooters meant more than just a display of alternative vehicles. It signaled that electrification and digitalization are spreading beyond cars and into personal mobility as a whole. Companies such as Matter and Verge demonstrated that two-wheelers are no longer merely simple and inexpensive means of transport, but can become technology-intensive products featuring high-performance electric drivetrains and sophisticated data connectivity.


 




Micromobility: Meaningful and Meaningless

Micromobility has the potential to address urban congestion, limited parking space, and short-distance commuting. If well integrated into the urban environment, it can meaningfully reduce the burden on existing transport infrastructure.

At the same time, CES also featured gimmick-like concepts whose usefulness was questionable. Products such as rideable electric board cases were technically interesting and entertaining, but they gave the strong impression of prioritizing novelty over genuine mobility value. This year’s exhibition showed both the serious potential of micromobility and the experimental blurring of the boundary between mobility and lifestyle gadgetry.
 

Artificial Intelligence in the Car

Artificial intelligence permeated CES as a whole, and its role in the automotive field was clearly expanding. Automotive AI is developing along two broad paths: AI inside the vehicle and AI in the vehicle development process.

In-vehicle AI is advancing through voice assistants, context-aware HMI, personalization, driver and occupant monitoring, and adaptive vehicle behavior, enhancing user experience, safety, and functionality. These systems are no longer merely reacting to simple inputs; they are increasingly moving toward understanding user intent and context.

Meanwhile, the quieter but potentially more transformative development in the long term is AI inside the development process itself. AI-based coding assistance, simulation, test automation, bug detection, and predictive quality assurance tools promise significant efficiency gains for an automotive industry struggling with software complexity and long development cycles. In the future, the competitiveness of car companies will depend not only on how intelligently their vehicles behave, but also on how intelligently they are developed.
 

Other Notable Themes: The Best of the Rest

Beyond the major trends, several secondary developments were also striking. One was the way vehicles are becoming part of a larger digital ecosystem. Cars are being redefined as connection points linked to smartphones, wearables, the cloud, and smart homes, while user experience is evolving toward continuity across devices.

Another was the expansion of human-centered safety beyond traditional ADAS. Safety is no longer only about improving intervention systems; it increasingly involves understanding driver and occupant attention, stress, fatigue, and cognitive load.

At the same time, platformization of hardware was progressing across vehicles, two-wheelers, and micromobility devices. Hardware is becoming more standardized, while differentiation is increasingly created through software, services, and digital experience.

Sustainability, too, appeared less as a grand slogan and more in practical forms such as efficiency, lifecycle optimization, resource reduction, and energy-efficient software strategies.

 

Automotive and Mobility Products

BMW Panoramic Drive

The BMW booth was once again one of the key highlights of CES this year. BMW understood CES not simply as a technology exhibition but as an experiential space. From the hospitality of offering coffee, water, and candy to the demonstrations of vehicles and infotainment concepts, the entire booth was staged like a theater of hospitality, storytelling, and innovation.

At the center of the presentation was BMW Panoramic Drive. BMW presented gamification and entertainment not as add-on features but as core elements of the in-vehicle experience. Given a future in which automation increases and free time inside the vehicle expands, this approach becomes even more meaningful. The effort to transform the vehicle interior into an immersive digital environment through large spatial displays and intelligent software was unmistakable.


 


Verge motorbike with a Donut

 

Verge Donut

Verge Donut is less a technology for one particular vehicle than a concept that invites us to rethink electric propulsion itself. Developed by Verge Motorcycles, it is a hubless in-wheel electric motor that delivers torque directly within the wheel. By reducing or eliminating traditional drivetrain components, it offers clear advantages in efficiency, packaging, and mechanical simplicity.

What is even more interesting is its scalability. It can be applied not only to motorcycles, but also to delivery vehicles, commercial trucks, special mobility platforms, and even non-ground-based mobility such as drones or autonomous robots. In that sense, Verge Donut looked less like a mere component and more like a building block for a wide range of mobility solutions.
 

TomTom

TomTom delivered a highly convincing presentation in a dedicated meeting room in the West Hall. The company focused on the evolution of maps and location intelligence as foundational infrastructure for software-defined mobility. Rather than presenting maps as static navigation tools, it framed them as continuously updated and interpreted AI-based location intelligence.

This suggests that maps will operate not merely as guidance systems, but as dynamic platforms connecting vehicles, services, and user experience.
 

AGC Glass and Gentex

AGC demonstrated how advanced glass technology can function as an active value-generating interface layer within the vehicle interior without becoming visually overwhelming. By integrating display, touch, lighting, and sensing into a single glass surface while maintaining a calm and restrained visual presence, the company aligned closely with the concept of Shy Tech, in which functionality only reveals itself when needed. From an HMI perspective, this enables a more human-centered experience by reducing cognitive load and preserving spatial and aesthetic clarity.

Gentex was equally impressive in a similar way. The company integrated displays, sensing, dimming, and driver monitoring functions naturally into familiar components such as mirrors. This extends functionality without adding new screens, and reduces cognitive burden. In an exhibition dominated by ever-larger displays and increasingly complex interfaces, Gentex showed that for UX and safety, the more important question may not be how large technology is made to appear, but where and how it is integrated.
 

Additional Mobility Products

Drones were again highly visible this year, particularly in logistics, inspection, surveillance, and emergency response applications. Unlike in the past, the emphasis was less on demonstrating the ability to fly and more on real applications involving autonomy, fleet operation, sensing, and integration into existing workflows.

By contrast, eVTOL remained controversial. However visually impressive and conceptually attractive it may be, serious doubts remain about its real-world viability. Many projects still depend on optimistic assumptions regarding infrastructure, regulation, cost, noise, and public acceptance. It remained one of the fields where the gap between technological possibility and social value felt especially wide.

Compared with that, mobile equipment such as construction machinery and agricultural equipment offered far more realistic innovation stories. Autonomous or semi-autonomous tractors, construction machines, and utility vehicles demonstrated how automation, electrification, and AI can create immediate value in controlled environments.

Boats and marine mobility also emerged as an interesting area. Electric propulsion, assisted navigation, and automated docking systems showed how technologies developed in the automotive and robotics sectors are beginning to extend into the marine domain.
 

SoundHound

One of the most impressive and forward-looking demonstrations was SoundHound’s agentic AI-based voice interaction. It clearly showed how far voice interfaces have evolved beyond the traditional command-and-response model.

Rather than focusing on isolated voice commands, SoundHound presented a conversational agent capable of maintaining context over time, understanding complex multi-step intentions, and proactively supporting the user. In the automotive environment, the significance of this shift is particularly great. As vehicles evolve into software-defined platforms and functions become more complex, touch- and menu-based interfaces can easily overwhelm users. By reducing the need for visual attention and lowering the friction of interaction, SoundHound’s approach directly alleviates driver cognitive load and distraction.

Most importantly, it changes the status of voice itself. SoundHound presented voice not merely as an input channel, but as a genuine interaction partner capable of understanding intent, context, and continuity.
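To make the difference from command-and-response concrete, the sketch below shows, in a few lines of Python, what a context-keeping dialogue loop looks like. It is only an illustration of the principle; the class, slot names, and intents are hypothetical and do not represent SoundHound’s actual API.

```python
# Minimal sketch of a context-keeping voice agent (hypothetical names,
# not SoundHound's API). The point of "agentic" interaction is that each
# utterance is interpreted against accumulated dialogue state, so a
# follow-up like "make it two hours later" resolves without the user
# restating the whole request.
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    slots: dict = field(default_factory=dict)  # entities resolved so far

def handle(utterance: str, state: DialogueState) -> str:
    text = utterance.lower()
    if "charging stop" in text:
        # First turn: a multi-step intent (route + charging + timing).
        state.slots["task"] = "route_with_charging"
        state.slots["arrival_hour"] = 18
        return "Route with a charging stop planned, arriving 18:00."
    if "later" in text and "arrival_hour" in state.slots:
        # Follow-up turn: resolved against context, not re-stated.
        state.slots["arrival_hour"] += 2
        return f"Arrival moved to {state.slots['arrival_hour']}:00."
    return "Sorry, could you rephrase?"

state = DialogueState()
print(handle("Plan a charging stop on the way to Munich", state))
print(handle("Make it two hours later", state))  # only works via context
```

A command-and-response system would treat the second utterance as meaningless; a context-keeping agent resolves it against the running dialogue state, which is exactly what removes friction and visual load in the car.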
 

NXP In-Cabin Sensing

NXP presented in-cabin sensing not merely as a compliance feature for regulation, but as a foundational element of the software-defined vehicle. Its central approach combined radar sensors, smartwatch data, and intelligent algorithms to understand not only whether occupants are present, but also aspects of their physical and cognitive condition.

This was particularly significant in the context of Level 3 and higher automation, where in-cabin sensing data can be combined with ADAS and external vehicle sensors to enable real-time decision-making that considers not only the outside environment but also the condition of the occupant. From this perspective, the cabin is no longer a passive space within an autonomous system, but an active component of it.
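A rough sketch of the fusion logic described above may help. All signal names and thresholds here are illustrative assumptions, not NXP’s implementation; the point is simply that radar presence and micro-motion can be combined with wearable heart-rate data into a single occupant-state estimate that higher-level automation can act on.

```python
# Simplified in-cabin sensor-fusion sketch (illustrative values only):
# radar contributes presence and micro-motion, a paired smartwatch
# contributes heart rate, and a simple rule fuses them into an
# occupant-state estimate that an L3 system could weigh when deciding
# whether a takeover request is currently safe.
from dataclasses import dataclass

@dataclass
class RadarFrame:
    occupied: bool
    micro_motion: float  # breathing/movement-induced signal, arbitrary units

@dataclass
class WatchSample:
    heart_rate_bpm: int

def occupant_state(radar: RadarFrame, watch: WatchSample | None) -> str:
    if not radar.occupied:
        return "empty"
    low_motion = radar.micro_motion < 0.2            # very still occupant
    low_hr = watch is not None and watch.heart_rate_bpm < 55
    if low_motion and low_hr:
        return "likely_drowsy"   # e.g. extend takeover lead time, alert
    return "attentive"

print(occupant_state(RadarFrame(True, 0.1), WatchSample(52)))  # likely_drowsy
```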
 

Sensors for Driving Automation: LiDAR Finds Its Place

Sensor technology was a core theme across the automotive exhibition. Cameras, radar, ultrasonic sensing, and interior sensing were no longer presented as individual components, but as parts of an integrated sensing stack supporting safety, automation, and advanced UX.

The most noticeable shift was the changing status of LiDAR. For some time, LiDAR had been one of the most debated technologies in automotive, due to issues of cost, packaging, and reliability. But the mood has now clearly changed. The question is no longer whether LiDAR will be used, but where and how it can add the greatest value.

The solutions on display had evolved into automotive-grade LiDAR systems that were far smaller, more robust, and more cost-efficient than earlier generations. The conspicuous sensor towers of the past were being replaced by designs that are far easier to integrate aesthetically and structurally into vehicles.
 

NVIDIA Hyperion & Alpamayo / Android Automotive and SPARQ OS

NVIDIA’s release of the Alpamayo software stack needs to be viewed critically. On the surface, it appears to lower the barrier to entry for autonomous driving development through openness. But behind that lies a form of structural dependency. Companies using Alpamayo will ultimately train on NVIDIA hardware, infer on NVIDIA GPUs, and simulate within NVIDIA’s software environment. In that sense, open source is not simply openness—it can also function as a mechanism for reinforcing hardware and platform lock-in.

Android Automotive, meanwhile, is expanding beyond an IVI platform for cars into a flexible software foundation for mobility more broadly. This trend was particularly visible in the two- and three-wheeler segments. In this context, P3 Group’s SPARQ OS demonstrated how Android Automotive can be adapted to the unique conditions of two- and three-wheeled mobility, including limited display space, different control ergonomics, the need for high visibility, and the importance of maintaining rider focus. Rather than simply transferring automotive HMI into another category, it represented an approach based on designing a dedicated UX appropriate to that vehicle type.

 

Trends and Technologies in HMI

Humane HMI: Haptics and Beyond

One of the most prominent HMI trends was the move toward a more humane HMI—one that feels more natural, embodied, and intuitive to people. Beyond voice, haptics is now emerging as a core interaction modality. Instead of demanding constant visual attention, it enables technology to communicate quietly and directly through the body. Human-centered interaction is becoming increasingly multisensory, subtle, and physically grounded.

Grewus offered an impressive example of this embodied HMI through seat-integrated active haptics. By transforming the seat from a simple comfort component into a core interaction surface, it can deliver directional guidance, warnings, confirmations, and even immersive feedback for music and gaming directly to the occupant’s body.
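The principle is easy to picture in code. The sketch below maps driving events to seat actuator zones; the zone layout, event names, and intensities are invented for illustration and are not Grewus hardware parameters.

```python
# Illustrative seat-haptics mapping (hypothetical zones and values):
# directional guidance is routed to the matching seat zone so a
# "turn left" cue is felt in the left bolster instead of demanding
# a glance at the map.
SEAT_ZONES = {"left_bolster": 0, "right_bolster": 1, "back_center": 2}

def haptic_cue(event: str) -> tuple[int, int]:
    """Return (actuator zone, intensity 0-100) for a driving event."""
    mapping = {
        "turn_left":         (SEAT_ZONES["left_bolster"], 60),
        "turn_right":        (SEAT_ZONES["right_bolster"], 60),
        "collision_warning": (SEAT_ZONES["back_center"], 100),
    }
    return mapping.get(event, (SEAT_ZONES["back_center"], 0))

print(haptic_cue("turn_left"))  # left bolster pulses, eyes stay on the road
```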

 

Cockpit Architecture: After Peak Display

CES 2026 showed that the industry is now moving beyond the stage of peak display. Over the last several years, cockpit innovation has largely been driven by increasing the size and number of screens. That quantitative expansion now appears to have reached saturation. The focus is shifting from having more screens to integrating displays more intelligently and contextually into the overall structure of the vehicle interior.

This post-display strategy takes several forms. One is Shy Tech and material-integrated displays, where information appears only when needed and technology otherwise recedes into the background. Another is the growing use of non-visual interaction such as voice, haptics, and sound in order to reduce visual burden.

Ultimately, the cockpit is being reinterpreted less as a static control center and more as a flexible, inhabitable space. What matters now is no longer the size of the screen, but how elegantly the presence of technology is managed.
 

Alps Alpine HMI Technology / HMI Tooling and Process

Alps Alpine offered a broad demonstration of how sensors, actuators, and interface technologies can translate into seamless user experiences. Particularly impressive were its touch and haptic solutions, which enabled precise control without requiring prolonged visual attention. What made the exhibition even more convincing was its cross-domain relevance. Not all of the demonstrations were intended specifically for automotive use, but principles such as non-visual interaction, cognitive load reduction, multisensory design, and accessibility are directly transferable to in-vehicle HMI.

The foundation of high-quality automotive HMI ultimately lies in the tools and processes used to develop it. Elektrobit emphasized its role in closing the gap between UX concept and production. Rightware pointed toward solutions that can satisfy both rich graphics and deterministic real-time behavior. Candera demonstrated a model-based approach to handling complex HMI logic efficiently.

Today, HMI competitiveness depends not only on what is designed on the screen, but on how reliably and repeatably that design can be carried into production.

 

One More Thing: Beyond the Car

General HMI Trends: The Revenge of the Analog

One of the most striking impressions at this year’s show was what might be called the revenge of the analog. In an environment overflowing with AI demos, immersive displays, and touch-based interfaces, the longest lines often formed in front of booths offering low-tech or clearly physical experiences.

People constantly gathered around pinball machines. Others waited to play chess or Go against robots—not through touchscreens, but on real boards with physical pieces.

This was not simply nostalgia. In an environment oversaturated with screens, menus, and digital abstraction, people were more strongly drawn to interactions that were tangible, mechanical, and physically felt. Analog interaction offers direct manipulation, immediate feedback, and a clear sense of cause and effect.

What was especially interesting was that in the chess and Go demonstrations, the intelligence itself was inside the AI robot, but the interaction remained physical and familiar. The question, then, is not digital versus analog. It is how digital intelligence can be manifested in a human-friendly way. In that sense, the revenge of the analog is not a rejection of technology. It is a signal that the more complex systems become, the more human their interfaces must be.
 

Smart Glasses / Home Appliances as Computing Hubs

Smart glasses appeared in forms that were far more mature and realistic than before. Attention has shifted away from bulky XR headsets toward lightweight, natural-looking glasses suitable for everyday wear. More important than total immersion is subtle augmentation that supports the user quietly rather than dominating them. Smart glasses are steadily becoming ambient interfaces that intervene only when needed—for navigation, translation, or context-specific information.

Home appliances, meanwhile, are evolving beyond standalone devices into central computing hubs of the smart home. Fixed appliances such as refrigerators, ovens, and washing machines are gaining more computing power, connectivity, and AI, becoming the center of intelligent systems that coordinate services and data within the home. In effect, appliances are becoming quiet but reliable digital coordinators.


 


Exoskeleton: the physical extension of human capability




Exoskeletons and Physical AI

Exoskeletons are a form of embodied interface—not based on screens or control panels, but directly coupled to the human body, responding to movement and effort. They are evolving into practical tools that enhance safety and productivity by augmenting human capability exactly where it is needed most.

Physical AI refers to systems that perceive the real world through sensors, interpret situations, and act upon the environment through actuators, movement, and force. Unlike purely digital AI, physical AI must deal with uncertainty, safety, timing, and physical constraints. Many of the examples at the show demonstrated that AI is moving beyond mere judgment and into the stage of acting in the real world.
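Stated as code, the definition amounts to a closed sense-interpret-act loop running under timing and safety constraints. The sketch below uses generic placeholders rather than any specific product’s API.

```python
# Generic sense -> interpret -> act loop (placeholder values, no real
# hardware): the defining feature of physical AI is that decisions are
# made against physical safety margins and executed on a fixed period.
import time

READINGS = [3.5, 2.4, 1.8]                # stand-in for live sensor input

def interpret(obstacle_m: float) -> str:
    # Unlike purely digital AI, the decision must respect a safety margin.
    return "stop" if obstacle_m < 2.0 else "advance"

def act(decision: str) -> None:
    print(f"actuator command: {decision}")  # stand-in for motor control

for obstacle_m in READINGS:               # control loop with timing limit
    act(interpret(obstacle_m))
    time.sleep(0.1)                       # fixed control period
```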


 


AI and board games

 

Open Questions

Looking back at CES 2026, one conclusion became unmistakably clear: the central challenge is no longer technical capability in itself, but Human Relevance.

The industry is increasingly capable of building intelligent systems, software platforms, and autonomous machines. But design that respects human cognitive ability, supports skill rather than causing technological deskilling, and builds trust rather than dependency remains far from fully resolved.

From a human-centered perspective, the future of mobility and technology will not be determined by who has added more AI or installed larger displays. It will depend instead on who better understands human limits, habits, emotions, and the need for control, clarity, and dignity in interaction.

Technology must not simply become louder, faster, or more autonomous. It must become more naturally integrated into human life. The future belongs not to machines that replace people, but to systems that understand, support, and empower them.

 

How Will the Future Automotive Industry Evolve?

The automotive industry is steadily moving from a product-centered manufacturing business toward a system- and platform-centered industry. Vehicles are now defined more by software architecture, AI capability, sensing, and connectivity than by engines or chassis.

CES 2026 made it clear that there is no single unified vision of the future automobile. Rather, the industry is fragmenting in multiple directions. Some companies are moving toward service-based autonomous mobility and fleet models, while others continue to focus on personally owned vehicles enhanced by automation and AI. The scope of mobility itself is widening beyond the car, extending into two-wheelers, micromobility, robotics, and hybrid ecosystems.

This suggests that what we call the automotive industry is no longer a single coherent industry, but an aggregate of different mobility domains operating according to different logics. At the same time, external technology platform providers—cloud companies, AI companies, semiconductor firms, and software companies—are increasingly determining the speed and structure of innovation.

The real danger here is not simply technological lag. It is the loss of sovereignty over where core value is created.

The automotive industry is not moving toward one future, but toward several parallel futures: from product to platform, from isolated vehicle to ecosystem, from hardware cycles to software speed, and from competition over technical specifications to competition over human experience.

The remaining question is not whether this transformation will happen. It is already happening.

The real question is who will actively design this transformation—and who will merely adapt to a framework designed by others.


 

AEM (Automotive Electronics Magazine)



Copyright © AEM. Unauthorized reproduction and redistribution prohibited.




