Mixing and matching works wonders: autonomous driving AI chips stage an architecture battle

After the wild growth of 2016 to 2019 and the industry reshuffle of 2019, autonomous driving has entered a new stage of development. Industry leaders such as Google’s Waymo, Baidu Apollo, Tesla, NVIDIA, and Mobileye continue to iterate their technologies and accelerate scene-based deployment. To this end, China Electronics News has launched a series of reports, “Towards a New Era of Autonomous Driving,” covering technology upgrades, vendor strategies, industry development, and more to describe the new face of the autonomous driving industry.

AI chips are hot, and self-driving AI chips are even hotter. Established chipmakers and emerging companies at home and abroad, including NVIDIA, Intel, Tesla, Qualcomm, Horizon Robotics, and Black Sesame Technologies, have poured into the automotive AI chip market. Today the commercialization of L2+ ADAS is in full swing, and the path to deploying L4 high-level autonomous driving is growing ever clearer. There is no doubt that autonomous driving is becoming a commanding height that leading chip companies are racing to seize.

Judging from the product roadmaps of major manufacturers, autonomous driving chips present a pattern in which three architectures, GPU, FPGA, and ASIC, coexist and prosper. The underlying architecture, however, is not the only factor that determines autonomous driving capability. As cars grow more intelligent, the demands on autonomous driving software rise accordingly, and a chip race that “starts from hardware” is now fully under way.

“CPU+XPU” is the mainstream trend in autonomous driving chip design

Autonomous vehicles are becoming ever more intelligent, and the volume of data they must process keeps growing. High-precision maps, sensors, lidar, and other hardware and software place higher demands on computing, so main-control chips with AI capability have become mainstream. On top of the main-control chip, acceleration chips raise computing power and speed up algorithm iteration. Common AI acceleration chips today include the GPU, ASIC, and FPGA.

Wang Xianbin, a senior analyst at Gasgoo Automotive Research Institute, told China Electronics News that traditional vehicles commonly use ECUs, whose underlying chips are mainly CPUs. Autonomous driving requires a great deal of real-time data processing, and the computing power and functionality of a CPU alone cannot meet the requirement. Combining a CPU with GPU, FPGA, ASIC, or other architectures into a “CPU+XPU” design is the mainstream trend in autonomous driving chips.
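To make that division of labor concrete, here is a minimal sketch of the “CPU+XPU” pattern. It assumes nothing about any vendor’s actual software stack: NumPy’s vectorized kernels stand in for the accelerator, and all class and function names are illustrative.

```python
import numpy as np

# Minimal sketch of the "CPU+XPU" division of labor. The CPU runs
# sequential control logic; the "XPU" (GPU/FPGA/ASIC) runs the
# massively parallel math. NumPy stands in for the accelerator here.

class SimulatedXPU:
    """Stands in for an accelerator that processes whole tensors at once."""
    def infer(self, frame: np.ndarray) -> np.ndarray:
        # One "layer" of perception math, executed data-parallel.
        weights = np.ones((frame.shape[-1], 8), dtype=np.float32) / frame.shape[-1]
        return np.maximum(frame @ weights, 0.0)   # matmul + ReLU

def drive_loop(frames, xpu: SimulatedXPU):
    """CPU side: sequencing, branching, and decision logic."""
    for frame in frames:                 # CPU: control flow
        features = xpu.infer(frame)      # XPU: parallel tensor math
        if features.mean() > 0.5:        # CPU: simple planning decision
            print("obstacle-like signal: brake")
        else:
            print("path clear: continue")

drive_loop([np.random.rand(64, 16).astype(np.float32) for _ in range(3)],
           SimulatedXPU())
```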

At present, mainstream manufacturers mostly design autonomous driving chips around such “CPU+XPU” combinations. NVIDIA Xavier and Tesla FSD follow a “CPU+GPU+ASIC” route. Xavier takes the GPU as its computing core and has four main modules: CPU, GPU, Deep Learning Accelerator (DLA), and Programmable Vision Accelerator (PVA), of which the GPU occupies the largest die area. Tesla FSD takes an NPU (an ASIC) as its computing core and has three main modules: CPU, GPU, and Neural Processing Unit (NPU). Tesla’s self-developed NPU occupies the largest area and is mainly used to run deep neural networks, while the GPU mainly runs the networks’ post-processing stage.

Mobileye EyeQ5 and the Horizon Journey series adopt a “CPU+ASIC” architecture. EyeQ5 has four main modules: CPU, Computer Vision Processors (CVP), Deep Learning Accelerator (DLA), and Multithreaded Accelerator (MA). The CVP is an ASIC designed for many traditional computer vision algorithms. Horizon, for its part, independently designed and developed a dedicated AI ASIC, the Brain Processing Unit (BPU).
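The module lists above all follow the same recipe: a CPU for control plus specialized engines for specific workloads. The routing table below is a hypothetical sketch of how such an SoC might assign work, with the mappings taken from the module descriptions in the text; the dispatcher itself is illustrative, not any vendor’s API.

```python
# Hypothetical sketch: how the heterogeneous SoCs described above might
# route workloads to their on-chip engines. The mappings follow the
# module descriptions in the article; the dispatcher is illustrative.

ENGINE_MAP = {
    "NVIDIA Xavier": {
        "deep_neural_network": "DLA",
        "classic_vision":      "PVA",
        "post_processing":     "GPU",
        "control_logic":       "CPU",
    },
    "Tesla FSD": {
        "deep_neural_network": "NPU",   # Tesla's self-developed ASIC
        "post_processing":     "GPU",
        "control_logic":       "CPU",
    },
    "Mobileye EyeQ5": {
        "deep_neural_network": "DLA",
        "classic_vision":      "CVP",   # ASIC for traditional CV algorithms
        "control_logic":       "CPU",
    },
}

def dispatch(soc: str, workload: str) -> str:
    """Return the engine a given SoC would likely assign to a workload."""
    return ENGINE_MAP[soc].get(workload, "CPU")  # CPU as the fallback

print(dispatch("Tesla FSD", "deep_neural_network"))  # -> NPU
```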

Waymo adopts “CPU+FPGA”: its computing platform pairs Intel Xeon CPUs with 12 or more cores with Altera’s Arria-series FPGAs.

Three architectures race toward high-level autonomous driving

“GPU is good at image recognition, while ASIC and FPGA can be flexibly designed to meet customized needs,” Wang Xianbin told China Electronics News.

Autonomous driving demands high-precision, high-reliability image recognition. The GPU was originally conceived to handle the large-scale parallel computing of image processing, which happens to match a key technical requirement of autonomous driving. NVIDIA’s long accumulation of technology and market share in GPUs let it capture the market quickly after entering the autonomous driving track. Its partners include traditional carmakers such as Mercedes-Benz, Volvo, Hyundai, Audi, and SAIC, while new carmakers such as NIO, Li Auto, and Xpeng also use NVIDIA’s self-driving chips.
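A small sketch shows why image work maps so naturally onto GPUs: each pixel’s computation is independent, so thousands of them can proceed in parallel. Here NumPy’s vectorized kernel stands in for a GPU; the contrast with the pure-Python loop mirrors the CPU-versus-GPU gap in spirit, not in magnitude.

```python
import numpy as np
import time

# Each pixel's result depends only on that pixel, so the work is
# embarrassingly parallel -- exactly the shape of computation GPUs
# were built for. NumPy's vectorized kernel stands in for the GPU.

image = np.random.rand(480, 640).astype(np.float32)

def threshold_serial(img, t=0.5):
    out = np.empty_like(img)
    for i in range(img.shape[0]):          # one pixel at a time
        for j in range(img.shape[1]):
            out[i, j] = 1.0 if img[i, j] > t else 0.0
    return out

def threshold_parallel(img, t=0.5):
    return (img > t).astype(np.float32)    # all pixels "at once"

t0 = time.perf_counter(); serial = threshold_serial(image)
t1 = time.perf_counter(); parallel = threshold_parallel(image)
t2 = time.perf_counter()

assert np.array_equal(serial, parallel)
print(f"serial: {t1 - t0:.3f}s  vectorized: {t2 - t1:.5f}s")
```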

In April this year, NVIDIA unveiled its latest autonomous driving SoC, DRIVE Atlan. According to reports, a single Atlan chip delivers up to 1,000 TOPS of computing power and will target L4 and L5 autonomous driving. NVIDIA CEO Jensen Huang has stated publicly that Atlan samples will reach developers in 2023 and that vehicles will adopt it in volume in 2025. Wang Xianbin pointed out that autonomous driving chips will feature more GPUs with diversified architectures in the future: high-precision maps, sensors, and lidar impose ever-higher requirements on image recognition, so demand for GPUs will keep growing.
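As a rough back-of-envelope exercise, the snippet below works out what 1,000 TOPS would mean per camera frame; the frame rate and camera count are assumptions for illustration, not figures from NVIDIA.

```python
# Back-of-envelope sketch of what 1,000 TOPS means in practice.
# The frame rate and camera count are illustrative assumptions.

tops = 1_000                         # 1 TOPS = 10**12 operations/second
ops_per_second = tops * 10**12
fps = 30                             # assumed camera frame rate
cameras = 8                          # assumed number of cameras
ops_per_frame = ops_per_second / (fps * cameras)
print(f"~{ops_per_frame:.2e} operations available per camera frame")
# -> ~4.17e+12 ops per frame, ample headroom for large perception nets
```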

Tesla has followed a design route similar to NVIDIA’s, but with a greater emphasis on ASICs. In August this year, at Tesla AI Day 2021, Musk showed the outside world Dojo, Tesla’s self-developed cloud training system. Dojo’s training chip is an ASIC focused on AI training, which Tesla says achieves BF16 computing power of 1,024 GFLOPS, with efficiency exceeding existing GPUs and TPUs; this can greatly speed up algorithm improvement, paving the way for L4 and L5 autonomous driving. In the cloud, Dojo simulates a world very close to the real one in which to train autonomous driving technology.
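For readers unfamiliar with the format, the sketch below shows what the BF16 figure refers to: bfloat16 is a float32 with the low 16 mantissa bits dropped. It keeps float32’s 8-bit exponent, and thus its dynamic range, while giving up fine precision, a trade that favors AI training throughput.

```python
import numpy as np

# Sketch of the BF16 (bfloat16) format: a float32 with the low 16
# mantissa bits rounded away. Same 8-bit exponent as float32 (same
# dynamic range), but only ~3 decimal digits of precision.

def to_bf16(x: np.ndarray) -> np.ndarray:
    """Round float32 values to bfloat16 precision (stored as float32)."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    # round-to-nearest-even on the 16 bits that bfloat16 keeps
    rounded = (bits + 0x7FFF + ((bits >> 16) & 1)) & 0xFFFF_0000
    return rounded.view(np.float32)

vals = np.array([3.14159265, 1e20, 1e-20], dtype=np.float32)
print(to_bf16(vals))   # range survives; fine precision does not
# e.g. 3.14159265 -> 3.140625
```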

Musk has long believed that the only way to crack autonomous driving is to solve real-world AI problems, in both hardware and software, and that unless a company has strong AI capability and enormous computing power, autonomous driving will remain out of reach. Dojo was born of exactly this consideration. Tesla’s choice of an ASIC is also not hard to understand: Tesla has no intention of supplying general-purpose capability to every industry, and the advantage of an ASIC is that its design can be tailored to a product’s customized needs.

Waymo can be said to belong to the FPGA camp. In 2017, Intel announced that it had been working with Google on self-driving cars since 2009 and was supplying Xeon processors and Arria chips (for machine vision) to Waymo, the autonomous driving company under Google’s parent Alphabet. Arria is an FPGA family, but Waymo keeps a low profile on chips and has not disclosed many details. It is worth noting that Intel acquired Altera, a leading FPGA manufacturer, in 2015, and acquired Mobileye in 2017; Mobileye’s EyeQ series of autonomous driving chips is representative of the typical ASIC route.

Both NVIDIA’s and Tesla’s new products target L4 and L5 autonomous driving, and Waymo has positioned itself at the high end since entering the field. Leading manufacturers have thus formed a trend of upgrading their products around high-level autonomous driving.

Author: Zhang Yidi

Editor: Lian Xiaodong

Art Editor: Mary
