Enabling Intelligent CAV with a Connected Data Lifecycle
Teraki: 'The Secret Potion' for Mass Production of L2+
Interview with Geert-Jan Van Nunen, COO of Teraki
March 2021 print issue  /  written by Sangmin Han


From the left: Daniel Richart, CEO; Markus Kopf, CTO; and Geert-Jan Van Nunen, COO
    

In the long run, car makers will not rely on external IP and suppliers such as Mobileye. They will develop their own IP and their own self-driving and Level 2+ AI models, and they will run and mass-produce them on conventional low-power hardware instead of a trunk full of GPUs and servers. Teraki aims to provide the recipe that helps OEMs on this journey cut costs, accelerate deployment, and reach the 99%+ accuracy needed for mass production. Teraki COO Geert-Jan Van Nunen talked about the company's edge AI technology and its latest connected car data management project.

written by Sangmin Han_han@autoelectronics.co.kr



“Hello, Han. I'm Nils Meesterburrie from the Korean business development team of the German startup Teraki. The AV software development startup Teraki recently announced The Fusion Project.”
Shortly before Lunar New Year's Day, a Teraki staff member delivered the news.

Airbiquity®, Cloudera, NXP® Semiconductors, Wind River® and Teraki™ announced The Fusion Project, an automotive industry collaboration to define a streamlined data lifecycle platform to advance intelligent connected vehicles. The pre-integrated hardware and software solution combines innovative technologies from leading companies for automakers to efficiently collect, analyze, and manage connected vehicle data for continuous feature development, deployment, and evolution.

As connected vehicle technology continues to advance, the amount of available data is exponentially increasing. This data can be used to power new data-centric features for consumers and revenue-generating opportunities for automakers. But efficiently collecting, analyzing, and managing vehicle data presents unique challenges to automakers because of fragmented data management across the machine learning lifecycle, inaccuracies caused by static machine learning models, limited capabilities in intelligent edge computing, and insufficient in-vehicle computing power.

The Fusion Project addresses these challenges by pre-integrating technologies from five industry providers into a solution that automakers can easily evaluate and introduce into vehicle design and production cycles for next generation connected and autonomous vehicles. The first application for the Fusion Project is specifying a solution for intelligent vehicle lane change detection utilizing synergistic technologies from each company. 

We connected with Teraki COO Geert-Jan Van Nunen to hear about the company's edge AI technology and The Fusion Project. Here is the Q&A with him.

 






Efficient Data Collection


Q. There are start-ups that partner or conduct PoCs with well-known chipmakers and software vendors in order to sell to car manufacturers; some are being recognized in the industry and have already been selected by these car manufacturers. Could you give an introduction to Teraki and its AI and automotive capabilities, and place the company against this background?
A. Teraki specializes in improving customers’ AI-models in the automotive, drone and robot industries. Our product is edge pre-processing software that extracts the relevant information from any sensor (telematics, camera, lidar, radar, etc.). Our software is flexible, lightweight and highly compatible with ML and AI.

Teraki’s edge AI software delivers two main benefits to its customers:

Higher quality:
- Higher AI-model accuracy: our customers’ L2+ models become 10%-30% more accurate, which means safer and better cars.
- Lower latency: latency improves by a factor of 10, enabling safe, real-time L2+ applications.

Higher efficiency:
- Lower hardware costs due to a lightweight edge footprint (RAM, CPU, power).
- Lower data transmission and storage costs (up to 50X less).
- 10X or more faster training of customers’ AI-models, with pre-labelled data.

No competitor comes even close to all of the above.
 
Typical use cases where customers apply our software: L2+ driving models (e.g. overtake manoeuvres, lane changes, turns, and other scenes that pose a challenge to ADAS/AD providers), sensor fusion, object detection, remote control / teleoperation, mapping and localization, and range extension. 
We are already engaged in series contracts. Furthermore, we see market demand accelerating as OEMs and AV-tech companies race to overcome the challenges and truly deliver ADAS and AD capabilities in production cars. Teraki helps them bridge that gap.
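To make the edge pre-processing idea concrete, here is a minimal sketch, assuming a generic frame-based sensor stream; the relevance test, class name and threshold are illustrative stand-ins, not Teraki's actual algorithm:

```python
import numpy as np

class EdgeFilter:
    """Toy edge pre-processor: forwards only frames that differ enough
    from the last forwarded frame. The mean-absolute-change test is a
    crude stand-in for a learned relevance model."""

    def __init__(self, motion_threshold: float = 0.1):
        self.motion_threshold = motion_threshold
        self.last_kept = None

    def process(self, frame: np.ndarray):
        if self.last_kept is None:          # always keep the first frame
            self.last_kept = frame
            return frame
        change = np.mean(np.abs(frame - self.last_kept))
        if change > self.motion_threshold:  # relevant: transmit/store
            self.last_kept = frame
            return frame
        return None                         # redundant: drop at the edge

# Example: a synthetic stream where only every tenth frame changes much.
rng = np.random.default_rng(0)
stream = [rng.random((64, 64)) * (1.0 if i % 10 == 0 else 0.05) for i in range(100)]
f = EdgeFilter()
kept = [x for x in map(f.process, stream) if x is not None]
print(f"kept {len(kept)} of {len(stream)} frames")   # roughly 20 of 100
```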

 




Q. Connected & Autonomous Vehicle (CAV) data is key to car manufacturers' data-driven capabilities, product competitiveness, and revenue-generating opportunities. Data utilization will take place in a wide variety of areas. Why is the first focus of the Fusion Project on lane change detection applications?
A. Lane change is an important foundation for ADAS and AD, as it is the starting point for Lane Keep and Lane Overtake applications. Please see the picture above. Please also note that lane change is just an example use case chosen to concretely prove that the Fusion ecosystem works; the Time of Interest (TOI) could be anything and can be defined by the user beforehand. The main goal is efficient data collection, in which we store only the relevant information.
 
Lane change detection was chosen as a first example to demonstrate the power of Teraki edge AI and the integration achieved by the Fusion Partners. Conducting lane changes autonomously is still a challenge for ADAS: it requires a good understanding of the surroundings and the scene to decide when a lane change should be done and when it cannot be done. Hence, these types of scenarios need to be fed into an AD training stack, but getting this data is tedious.

Teraki edge AI is also able to detect other types of events like overtakes, different types of turns, lane departures, etc. It can automatically detect difficult scenes (e.g. construction sites or scenes with challenging lane markings) that L2+ AI-models without this edge intelligence would struggle with. 
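As an illustration of how such a Time of Interest might be flagged at the edge, the toy detector below labels an ego lane change whenever the lateral offset in the lane crosses a boundary and persists. This is simple threshold logic for exposition only; a production detector such as Teraki's would be a trained model:

```python
def detect_lane_changes(lateral_offsets, lane_width=3.5, hold=5):
    """Flag ego lane changes from a lateral-offset signal (metres from
    the current lane centre). Threshold logic for illustration only;
    a production detector would be a trained model."""
    events, run = [], 0
    for t, offset in enumerate(lateral_offsets):
        if abs(offset) > lane_width / 2:   # crossed a lane boundary
            run += 1
            if run == hold:                # persisted: not a brief swerve
                events.append(t - hold + 1)
        else:
            run = 0
    return events

# Example: the car drifts left across the boundary from sample 50 onward.
signal = [0.1] * 50 + [0.5 + 0.2 * i for i in range(15)] + [0.2] * 35
print(detect_lane_changes(signal))   # -> [57], where the offset exceeds 1.75 m
```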



Deploying L2+
 
Q. If lane change detection includes detecting the intentions of other cars, this application would be a Level 3 or higher autonomous driving application. How much more data do you need compared to L2, and how much larger an infrastructure (computing power, storage, etc.) do you need to handle it?
A. In our definition, lane change detection focuses on the lane changes of the data-recording car itself. This application example does not (yet) cover the moments when other vehicles change lanes. However, this could easily be programmed into our product as well.

As with most AI training, one needs several hundred to several thousand representative samples of each class. Since we focus on the scenes relevant for higher-level self-driving applications, we do not need a larger infrastructure in terms of computing power and storage.
This is one of the benefits of Teraki's intelligent selection and reduction of data at the edge, and it is why we can promise OEMs that they can run L3 and L4 driving on CPUs instead of GPUs. We are currently demonstrating this, for instance, with customers for radar and camera fusion on low-powered, ASIL-D chipsets.
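A rough back-of-the-envelope comparison, using assumed figures rather than Teraki's measurements, shows why event-triggered collection keeps the infrastructure small:

```python
# Assumed figures, for illustration only (not Teraki's measurements).
camera_rate_mb_s = 100        # raw camera stream, MB/s
drive_hours = 8               # recording time per vehicle per day

# Blind recording: capture everything.
blind_tb = camera_rate_mb_s * 3600 * drive_hours / 1e6          # 2.88 TB/day

# Event-triggered recording: keep short clips around flagged events.
events_per_hour = 10          # lane changes, overtakes, turns, ...
clip_seconds = 10             # ring-buffered window per event
event_tb = camera_rate_mb_s * clip_seconds * events_per_hour * drive_hours / 1e6

print(f"blind: {blind_tb:.2f} TB/day, event-triggered: {event_tb:.2f} TB/day")
# -> blind: 2.88 TB/day, event-triggered: 0.08 TB/day (a 36x reduction)
```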


AV-cars in Berlin

  
Q. To do this, you will need to 1) define with your customers what data is needed, and 2) apply appropriate algorithms to streamline the massive amount of data generated, both to reduce the required computing power and storage capacity and for data enrichment purposes. Are the available technologies and tools still insufficient to support the automotive industry in relation to ADAS, AV, and AI?
A. Yes. The development of AI methods for AD/ADAS often uses cutting-edge technology and frameworks developed by (and mostly for) the research community. Still, tool providers (i.e. companies offering AI accelerators and ASIL-certified hardware) have understood the need to support AI-focused software and the corresponding frameworks in their tool chains. Data and model formats supported by both sides of the landscape are increasingly used to move from the development stage into production, so a lot of progress is being made.

I would like to clarify, however, that the main value Teraki delivers lies not directly in the training of models, but mostly in running these models efficiently and accurately in production-grade cars.



99%+ Reliability

Q. With what technology, and how, does Teraki resolve these insufficiencies and filter the data? Also, the phrase "pre-integrated hardware and software solution" sounds very important here. What does it mean exactly?
A. Teraki’s intelligent edge AI understands what is happening with the car and its surroundings. We developed technology to detect “Region of Interest” (i.e. objects) and “Time of Interest” (i.e. events) in a given signal (video, point cloud, etc.). By using single or multiple sensor streams, Teraki distinguishes between different types of objects/scenes that a L2+ AI-model needs to handle. By allocating the highest resolution to the objects or events that matter, the ML benefits greatly and produces more reliable models. With this, we allow AD developers to focus on the relevant information and to improve and retrain their L2+ AI-models faster. The Teraki edge AI filters the required information for a (set of) AI-model(s) and allows the improved model(s) to run lightweight in the car. 
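One way to picture the Region of Interest idea is the sketch below: background pixels are stored coarsely while detected objects keep full resolution. The downsampling and the hard-coded ROI box are illustrative only; in Teraki's product the ROIs come from a learned detector and the encoding is more sophisticated:

```python
import numpy as np

def roi_compress(frame: np.ndarray, rois, scale: int = 4) -> np.ndarray:
    """Keep full resolution inside each ROI box and coarsen everything
    else. `rois` is a list of (y0, y1, x0, x1) boxes; in a real system
    they would come from a learned detector, not be hard-coded."""
    # Coarse background: subsample, then blow back up (a crude stand-in
    # for allocating a lower bitrate to unimportant pixels).
    coarse = frame[::scale, ::scale]
    out = np.repeat(np.repeat(coarse, scale, axis=0), scale, axis=1)
    out = out[: frame.shape[0], : frame.shape[1]]
    # Paste the regions that matter back in at full resolution.
    for y0, y1, x0, x1 in rois:
        out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
    return out

frame = np.random.default_rng(1).random((128, 128))
mixed = roi_compress(frame, rois=[(40, 80, 40, 80)])   # one hard-coded ROI
```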

The pre-integrated hardware and software solution is indeed the important point here. Together with the other Fusion Partners, we integrated an entire AI-model life cycle for production cars. We embedded the edge AI and OS on real production hardware (NXP), with which we collected real data from real traffic situations, sent it to the cloud for ML, trained a first AI-model (lane change detection as the first example), and deployed it to the car to run locally. We then retrained and improved that AI-model and achieved an accuracy of 98%, which is a great result given the relatively limited training. We offer this platform to OEM customers so they can create their own models and take them to the required 99%+ reliability levels.
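Schematically, the life cycle described here is a closed loop: collect events at the edge, train in the cloud, deploy over the air, repeat until the target reliability is reached. The sketch below outlines that loop; every function is a placeholder stub, not a real API of any Fusion partner:

```python
# Schematic of the AI-model life cycle described above; every function
# below is a placeholder stub, not a real partner API.

def evaluate(model):
    return model["accuracy"]

def collect_events_at_edge(model):
    # Edge AI on NXP hardware flags events of interest (e.g. lane changes).
    return [f"clip-{i}" for i in range(100)]

def ingest_to_cloud(clips):
    # Cloud side: ingest the selected clips for training.
    return {"samples": clips}

def retrain(model, dataset):
    # Cloud training; here each round simply nudges accuracy upward.
    return {"accuracy": round(model["accuracy"] + 0.005, 3)}

def deploy_over_the_air(model):
    # OTA update of the improved model to the fleet.
    pass

model = {"accuracy": 0.98}          # the first lane-change model from the article
while evaluate(model) < 0.99:       # iterate toward the 99%+ target
    dataset = ingest_to_cloud(collect_events_at_edge(model))
    model = retrain(model, dataset)
    deploy_over_the_air(model)
print(model)                        # -> {'accuracy': 0.99}
```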





Q. As a result, how effectively and accurately can the Fusion Project perform lane change detection with a reduced infrastructure? What are the key features/specifications of the hardware in the project? 
A. Hardware. It’s regular, production-grade automotive hardware. Thanks to the intelligent data filtering approach, the hardware Teraki uses for model development requires neither high in-car compute resources nor a data center for storage; it can rely on standard, low-powered automotive hardware. The NXP hardware deployed in the car offers acceleration for faster AI-model execution. The hardware used is the NXP BlueBox 2.0 and the NXP GoldBox (S32G). You can find the specs in this video (https://vimeo.com/513344754).

Intelligent edge. The standard workflow for developing L2+ AI-models "blindly" captures all data without filtering what is relevant and what is not. For example, normal driving and lane keeping on highways can be considered a "solved" problem, whereas driving in cities is far more challenging. Also, Teraki's processing runs in real time, which means we do not need high-end hardware to store the data: we just need a ring buffer that holds a few seconds of it.
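Such a ring buffer can be very simple. The sketch below, with assumed frame rate and window length, holds the last few seconds of frames in fixed memory and is flushed only when the edge model flags an event:

```python
from collections import deque

class RingBuffer:
    """Fixed-memory buffer holding the last few seconds of frames;
    flushed to storage only when an event of interest is flagged."""

    def __init__(self, seconds: float = 5.0, fps: int = 30):
        self.frames = deque(maxlen=int(seconds * fps))

    def push(self, frame):
        self.frames.append(frame)   # the oldest frame drops automatically

    def flush(self):
        clip = list(self.frames)    # e.g. the run-up to a lane change
        self.frames.clear()
        return clip

buf = RingBuffer(seconds=2, fps=10)  # assumed rates, for illustration
for t in range(100):
    buf.push(f"frame-{t}")
clip = buf.flush()                   # last 20 frames: frame-80 ... frame-99
```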

Reliability. The Teraki solution delivers very high precision, allowing the precision of the AI-model to improve from 98% to 99% and beyond.


Workflow




Q. Please explain the Fusion Project’s ADAS AI development workflow, and how Teraki and its partners fit into this development at the different stages.
A. Please see this video for an overview (https://vimeo.com/504737339).
Vehicle sensor data is collected and processed on board the vehicle using a combination of the NXP BlueBox autonomous driving platform and the S32G GoldBox service-oriented gateway platform, running the Wind River Linux OS and the Teraki Edge Analytics software. The Teraki edge software is configured by the customer to select lane change events, which are then ingested by the Cloudera ML platform. Processed vehicle data is transmitted off-vehicle to the Cloudera Data Platform, integrated with the Teraki Platform, for additional analytics, machine learning, reporting and storage. Via APIs, customers can configure what information they want to ingest from their fleets of cars to train a specific AI-model; this accelerates the training of customers’ AI-models significantly. The on-board Teraki Edge Analytics modules are automatically updated using the Airbiquity OTAmatic Software Update Client on the NXP S32G running Wind River Linux, and managed with OTAmatic software updates.
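The interview does not publish the actual API, but conceptually the customer-side configuration could look like the entirely hypothetical payload below, which selects which fleet events to ingest for a given model:

```python
import json

# Entirely hypothetical payload: the interview describes the capability
# (choosing which fleet events to ingest) but not the API itself.
config = {
    "fleet": "demo-fleet",
    "target_model": "lane-change-detector-v2",
    "events": [
        {"type": "lane_change", "clip_seconds": 10,
         "sensors": ["camera", "gps", "imu"]},
    ],
}
print(json.dumps(config, indent=2))   # what a customer might send via the API
```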

Q. Teraki started with the "traditional" approach but can now speed up its AI development workflow thanks to the solutions that are already deployed.

A. Correct. NXP provides the embedded hardware for data recording and embedded model execution; Cloudera provides the cloud machine learning; Airbiquity provides the over-the-air updates of the improved models.
 

Q. What are your next plans in the Korean and global automotive space? 
A. We see the Korean automotive space as world-leading, particularly when it comes to introducing new innovations and functionalities.
Our plan is therefore to expand our business in Germany and also to work with Korean (and other global) OEMs to help them achieve L2+ functionalities in their own cars faster and, most importantly, more reliably.

The OEMs will own the IP and the development of these L2+ AI-models, and will not remain or become IP-dependent on external suppliers such as Mobileye.

Teraki bridges the gap towards production-scale volumes: we enable OEMs to run this on low-powered, existing production hardware (instead of GPUs and trunks full of servers), thereby lowering costs, speeding up implementation, and providing the tools to achieve the required 99%+ accuracy. We want to accelerate the launch of safe L2, L3 and L4 cars around the world.

As demonstrated with The Fusion Project.


[AEM] Automotive Electronics Magazine


<Copyright (c) Smart&Company. Unauthorized reproduction and redistribution prohibited>

