CES talks about “the future” every year. But what stood out most on this year’s stage wasn’t the shape of the future - it was the language used to describe it. LG began with life. In an age where technology keeps pushing us forward, LG did the opposite: it questioned technology itself. A good life, it suggested, isn’t about faster upgrades - it’s about moments when experience returns to us. That worldview ran through TV, home appliances, the CLOiD home robot, and finally mobility, arriving at one conclusion: SDV doesn’t start inside the car. It’s a way of redesigning the flow of life through software.
Bosch, on the next stage, opened with the opposite language: the ability to bridge the gap between the physical and the digital. And to ensure the idea didn’t remain abstract, Bosch grilled a steak live on stage. It wasn’t a playful show - it was a physical proof that a closed loop of sensing - control - AI - quality outcome can converge on a target result. SDV, too, is not merely about how well AI can talk; it’s about whether software can safely bring physical systems - brakes, steering, suspension, and the cabin - into a desired state.
At CES 2026, SDV didn’t exist only as an “automotive” keyword. It appeared as something completed first in the living room, proven again on a frying pan in the kitchen, and then extended into every space. Following LG and Bosch, I found myself rethinking what the SDV race is really about - and what we should be watching.
By Sang Min Han, AEM han@autoelectronics.co.kr
CES 2026 opened with LG Electronics. The next stage belonged to Bosch. What was striking was that both companies placed human beings at the center - yet the language they used to explain SDV was entirely different. LG framed SDV as a continuous experience across spaces and life. Bosch brought SDV back as the capability to control the physical world. Same CES opening, but a very different temperature.
LG: SDV Completed First in Life
CES 2026 did not begin with product explanations. The opening video - launched with the question, “Hey LG, what does the future look like?” - felt less like a declaration and more like a challenge to technology. Technology has begun to speak in our place, urging us to move faster, upgrade sooner, and follow more quietly. In the race for a “better life,” the essence of life itself has been fading. So the future LG drew wasn’t a flashy demo. It was a life that begins with music that makes you smile, where 9-to-5 doesn’t feel like labor, and where the car exists as a space that can open the heart - a future about us, about humans. Innovation should not run ahead of life or dominate it; only when it touches real experience does a good life become complete.
On stage, CEO Jaecheol Lyu drew a clear line in how LG defines AI:
“When everyone talks about AI, we asked one question: What kind of AI do people need? Our answer was ‘Affectionate Intelligence.’”
His next question was even more symbolic:
“What if AI could step out of the screen and work in real life?”
For LG, AI cannot remain a conversational interface or a cloud service. It must operate inside the realities of life - habits, emotions, and cultures that differ from person to person. And LG chose the home as its starting point. As a brand already deeply embedded in everyday living spaces, LG sees understanding the rhythm and context of real households as its competitive edge in the AI era. LG’s AI home vision is ultimately about giving people their time back.
TV: An AI Hub Designed to Disappear
In the TV segment, Aaron Westbrook brought out both numbers and architecture. The Wallpaper TV - existing “invisibly” in an ultra-slim 9mm form factor - was not a design gesture. It was an attempt to change the connection structure itself. True Wireless removes cables and clusters; LG cited 4K 165Hz wireless transmission with ultra-low latency, a 3.9x brightness improvement, reflection suppression, and the Alpha 11 AI Processor Gen3 - NPU performance 5.6x the previous generation, CPU up 50%, GPU up 70% - framing design itself as performance.
But the core wasn’t hardware - it was the definition of the hub. Westbrook elevated the TV to the center of the AI home. A multi-AI structure on webOS integrating Google Gemini and Microsoft Copilot, Voice ID personalization, and security via LG Shield - these move the TV from “display” to a gateway through which life data and agents flow. For automotive, the vocabulary feels familiar: personalization, security, OTA, multi-agent - already operating as real-world language.
Home Appliances: Not “Added Features,” but a “Changed Role”
In the home appliance segment, Angela Gozenput described the shift not as feature additions but as a change in role. LG appliances are no longer machines waiting for commands; they are evolving into agent appliances that execute complex goals on their own. LLM-powered LG SIGNATURE refrigerators and ovens handle storage and cooking context through natural language, while built-in camera recognition evaluates cooking progress and intervenes proactively. The point isn’t an AI that “talks well,” but devices moving ahead of you toward life goals - freshness, cooking outcomes, routines.
Robot: A Physical Proof of Sense - Think - Act
At the center of LG’s worldview stood the robot. The home robot CLOiD, introduced by Brandt Varner, appeared as a physical embodiment of LG’s Sense - Think - Act structure. It recognizes the weather, reads the user’s condition and adjusts exercise plans, suggests dinner menus, and controls lighting and temperature - showing not conversational AI, but AI that acts.
LG defined the robot as a home-dedicated agent, connecting vision information and language to physical action through a Vision - Language - Action (VLA) model. And the motor and actuation know-how accumulated through appliances expands naturally into robotics. Here, the robot doesn’t look like an experiment; it looks like a natural evolution where appliances meet AI.
For the automotive world, this structure is not unfamiliar. Sense - Think - Act is exactly the core loop that autonomy and SDV seek inside vehicles: sense via sensors, decide via software, respond via physical systems. What becomes braking, steering, suspension, thermal management, and cabin control in a car becomes lighting, temperature, meals, and movement at home. SDV begins to look less like a “vehicle architecture problem” and more like a universal way of controlling the physical world through software.
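The loop described above can be sketched as a minimal control cycle. This is a purely illustrative, hypothetical example - the function names, the cabin-temperature scenario, and the rule-based “think” step are assumptions for illustration, not anything LG or Bosch presented:

```python
# Minimal, hypothetical sketch of a Sense - Think - Act loop,
# using cabin climate as a stand-in physical system.

def sense(env):
    """Read an observation from the (simulated) environment."""
    return {"cabin_temp_c": env["cabin_temp_c"]}

def think(obs, target_c=21.0):
    """Decide an action from the observation (stand-in for AI/software logic)."""
    error = target_c - obs["cabin_temp_c"]
    # Clamp the actuation step, as a real controller would limit its output.
    return {"hvac_delta_c": max(-1.0, min(1.0, error))}

def act(env, action):
    """Apply the action to the physical system."""
    env["cabin_temp_c"] += action["hvac_delta_c"]
    return env

def run_loop(env, steps=10):
    """Run the closed loop: sense, decide, actuate, repeat."""
    for _ in range(steps):
        env = act(env, think(sense(env)))
    return env

env = run_loop({"cabin_temp_c": 28.0})
print(round(env["cabin_temp_c"], 1))  # converges to the 21.0 °C target
```

Whether the “plant” is a cabin, a brake system, or a kitchen appliance, only the sense and act implementations change; the loop itself is the universal part.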
Mobility: SDV Is Not a Car, but a Space
Finally, Randon Fuller brought every prior session into mobility. LG positioned itself not as an automotive parts supplier but as an Experience Architect. The declaration - bringing AI experiences that start at home onto the road - redefines mobility as the orchestration of spatial experience rather than a bundle of parts.
LG’s scenarios focused less on braking or steering and more on spatial experience: gaze tracking and gesture recognition, the Mobility Display Solution extending the windshield into a display, continuity where content seen at home continues inside the car and across side windows, and real-time translation for sign language from outside the vehicle. The SDV LG described is the capability to compose a personalized space in real time, beyond the vehicle architecture itself.
LG’s SDV does not begin inside the vehicle. It is already completed in life and at home; the car is simply the next extension. SDV is not a car technology - it is the process of redefining the flow of life through software.
AEM (Automotive Electronics Magazine)
<Copyright © AEM. Unauthorized reproduction and redistribution prohibited>