Recently, ZTE’s V2X OBU (on-board unit; V2X stands for Vehicle to Everything, meaning the vehicle communicates with all road participants under the framework of autonomous driving) completed testing of three major parts of V2X protocol conformance (PC5 interface security, network-layer protocol conformance, and message-layer protocol conformance) at the laboratory of the China Academy of Information and Communications Technology in Beijing, with all 66 test cases passing. This marks the most critical milestone for ZTE’s V2X OBU and means it is fully prepared for the “four-span” interconnection demonstration event in Shanghai in October.
From October 22 to 24 this year, the C-V2X Working Group of the IMT-2020 (5G) Promotion Group, the China Intelligent Connected Vehicle Industry Innovation Alliance, the Society of Automotive Engineers of China, and Shanghai International Automobile City (Group) Co., Ltd. will jointly hold the C-V2X “four-span” interconnection application demonstration activity. It will realize the first domestic C-V2X application demonstration that works “across chip modules, across terminals, across vehicles, and across security platforms” (referred to as the “four-span” demonstration), which will further promote the industrialization of C-V2X in China.
As a global leader in communication technology, ZTE has deep technology accumulation and rich business experience, and is committed to developing future technologies in 5G, the Internet of Vehicles, and the Internet of Things. While continuing the strengths of its traditional automotive-grade modules, ZTE actively promotes the development of C-V2X. ZTE has released a variety of in-vehicle modules and terminals around the world and has launched a full range of C-V2X products, including V2X modules, V2X OBUs, and RSU roadside units. These aim to provide vehicles with vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-pedestrian (V2P), and vehicle-to-network information exchange, paving the way, together with partners, for autonomous driving in the coming 5G era.
It is reported that ZTE has carried out field tests of the Internet of Vehicles and autonomous driving in Tianjin, Shandong, Jiangsu, Xiong’an and other places. ZTE is actively involved in standards work, test beds, and demonstration zones, and together with partners across the industry chain contributes to the development and maturity of the Internet of Vehicles. At the beginning of September this year, ZTE’s Xuchang project was shortlisted for the first batch of MEC and C-V2X fusion test beds of the IMT-2020 Promotion Group; its MEC capability was once again recognized by the industry, and ZTE’s C-V2X terminals will also be used in the MEC test bed project.
ZTE has formed more than 30 5G+ series solutions and successfully implemented more than 50 demonstration projects across industrial internet, big video, Internet of Vehicles, media, energy, public safety, medical care, education, ecological and environmental protection, transportation and other industries. It has established strategic cooperation with customers in various industries to jointly deploy 5G applications, reached cooperation with more than 200 industry-leading product providers, and launched 5G-based solutions for different industries. As 5G networks mature, their high speed, large bandwidth and low latency will bring new development opportunities for in-vehicle communication. Unmanned driving is an important direction of 5G development, and in this field ZTE will continue to leverage its strengths to provide reliable communication channels for users worldwide and support the development of autonomous driving.
At this week’s 2020 VLSI Technology and Circuits Symposium, Intel will present a series of research findings and technical perspectives on the computing transformation caused by the growing amount of data distributed across the core, edge and endpoint. CTO Mike Mayberry will deliver a keynote speech titled “The Future of Computing: How Data Transformation is Reshaping VLSI,” highlighting the importance of transitioning from hardware/program-centric computing to data/information-centric computing.
“The massive flow of data across distributed edge, network, and cloud infrastructure requires energy-efficient and robust processing close to where the data is generated, often under bandwidth, memory, and power constraints. Intel’s research presented at the VLSI Symposium highlights several new ways to improve computational efficiency that show promise in a variety of application areas, including robotics, augmented reality, machine vision, and video analytics. The focus is on addressing the barriers to data movement and computing that represent the biggest data challenges of the future.”
– Vivek K. De, Intel Fellow, Director of Circuit Technology Research, Intel Research
What will be shown: Intel research papers presented at this symposium explore how higher levels of intelligence and energy efficiency can be achieved in future edge-network-cloud systems to support a growing number of edge applications. Topics covered in the papers (the full list of studies appears at the end of this press release) include:
Using ray casting hardware accelerators to improve the efficiency and accuracy of 3D scene reconstruction for edge robots
Paper: Efficient 3D scene reconstruction via a ray-casting accelerator in 10 nm CMOS in edge robotics and augmented reality applications
Why it matters: Applications such as edge robotics and augmented reality require accurate, fast, and energy-efficient reconstruction of complex 3D scenes from the large amounts of data generated by ray-casting operations for real-time dense simultaneous localization and mapping (SLAM). In this paper, Intel highlights a new ray-casting hardware accelerator that maintains scene reconstruction accuracy while achieving exceptionally energy-efficient performance. Its innovations include techniques such as voxel overlap search and hardware-assisted approximate voxel computation, which reduce the need for local memory and also improve power efficiency for future edge robotics and augmented reality applications.
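For readers unfamiliar with ray casting in dense SLAM, the sketch below shows in plain Python the core operation such an accelerator offloads: marching a ray through a signed-distance (TSDF) voxel volume until it crosses the surface. This is only a conceptual illustration with assumed names and parameters, not a description of Intel’s accelerator design.

```python
import numpy as np

def raycast_depth(tsdf, origin, direction, voxel_size=0.02, t_max=4.0, step=0.01):
    """March one ray through a TSDF volume and return the depth of the first
    zero crossing (an approximate surface hit), or None if no surface is found.
    `tsdf` is a 3-D array of signed distances; `origin`/`direction` are in
    volume coordinates (metres), with `direction` normalised."""
    prev_t, prev_val = None, None
    t = 0.0
    while t < t_max:
        p = origin + t * direction
        idx = tuple((p / voxel_size).astype(int))
        # Stop if the ray leaves the volume.
        if any(i < 0 or i >= s for i, s in zip(idx, tsdf.shape)):
            return None
        val = tsdf[idx]
        # The surface lies where the signed distance changes from + to -.
        if prev_val is not None and prev_val > 0.0 >= val:
            # Linear interpolation between the last two samples.
            return prev_t + step * prev_val / (prev_val - val)
        prev_t, prev_val = t, val
        t += step
    return None
```

A dense SLAM system repeats this operation per pixel on every frame, which is why moving it into dedicated hardware (and approximating the voxel look-ups, as the paper describes) pays off in both latency and energy.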
Utilize event-driven visual data processing units (EPUs) to reduce power consumption for deep learning-based video streaming analysis
Paper: A 0.05pJ/pixel 70fps FHD 1Meps event-driven visual data processing unit
Why it matters: Real-time deep-learning-based visual analytics, used mainly in fields such as safety and security, must detect objects quickly across multiple video streams, which demands heavy computation and high memory bandwidth. Input frames from these cameras are often downsampled to reduce the load, which lowers image accuracy. In this study, Intel demonstrated an event-driven visual data processing unit (EPU) that, combined with novel algorithms, instructs deep learning accelerators to process visual input using only motion-based “target regions.” This approach alleviates the compute-intensive and memory-heavy requirements of edge vision analytics.
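The “target region” idea can be pictured with a small software analogy: use cheap motion detection to find the few regions that changed, and hand only those crops to the expensive deep learning detector. The sketch below (OpenCV frame differencing with a placeholder `detector` callable) is our illustration of the concept, not the EPU’s actual algorithm.

```python
import cv2  # OpenCV, used here for frame differencing and contour extraction

def motion_rois(prev_gray, curr_gray, thresh=25, min_area=500):
    """Return bounding boxes of regions that changed between two frames."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

def analyse_stream(frames, detector):
    """Run the (expensive) detector only on the motion regions of each frame."""
    prev = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            for x, y, w, h in motion_rois(prev, gray):
                detector(frame[y:y + h, x:x + w])  # crop goes to the DNN accelerator
        prev = gray
```

When most of the scene is static, the detector sees only a few small crops per frame instead of a full FHD image, which is where the power and bandwidth savings come from.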
Extend local memory bandwidth to meet the demands of AI, machine learning and deep learning applications
Paper: 2x bandwidth burst 6T-SRAM designed for memory bandwidth limited workloads
Why it matters: Many AI chips, especially those used for natural language processing (such as voice assistants), are increasingly constrained by local memory bandwidth. Addressing this typically means doubling the memory clock or the number of memory banks, at the cost of power and area efficiency, which is especially painful for area-constrained edge devices. In this study, Intel demonstrated a 6T-SRAM array that provides 2x read bandwidth on demand in a burst mode, with 51% higher energy efficiency than frequency doubling and 30% better area efficiency than doubling the number of memory banks.
Use an all-digital binary neural network (BNN) for robust, energy-efficient inference on edge devices
Why it matters: In power- and resource-constrained edge devices, low-precision outputs are acceptable for some applications, and analog binary neural networks (BNNs) can be used as an alternative to higher-precision neural networks, which are more computationally demanding and memory-intensive. However, the prediction accuracy of analog BNNs is lower because they are less tolerant of process variation and noise. In this study, Intel demonstrates an all-digital BNN with energy efficiency similar to analog in-memory computing techniques, while providing better robustness and scalability on advanced process nodes.
Other Intel research presented at the 2020 VLSI Symposium includes the following papers:
The future of computing: How data transformation is reshaping VLSI
Low clock power digital standard cell IP for high performance graphics/AI processors in 10nm CMOS
An autonomously reconfigurable power delivery network (RPDN) for multicore SoCs with dynamic current control
3D monolithic heterogeneous integration enables GaN and Si transistors on 300mm silicon wafers (111)
Low-swing and column-multiplexed bit-line technology for low-Vmin, noise-tolerant, high-density 1R1W 8T-bit cell SRAM in 10nm FinFET CMOS
A dual-rail hybrid analog/digital LDO with dynamic current control for tunable high PSRR and high efficiency
A 435MHz, 600Kops/J side-channel attack-resistant encryption processor for secure RSA-4K public key encryption in 14nm CMOS
A 0.26% BER 10^28 modeling challenge-response PUF in 14nm CMOS with Stability-Aware Adversarial Challenge Selection
An anti-SCA AES engine with 6000x time/frequency domain leakage suppression using a nonlinear digital low-dropout regulator cascaded with computational countermeasures in 14nm CMOS
SOT-MRAM CMOS compatible process integration with heavy metal bilayer bottom electrode and 10ns field-free SOT conversion with STT assistance
Self-folding write-assisted 10nm SRAM design with gate modulation reduces VMIN by 175mV with negligible power overhead
About Intel
Intel (NASDAQ: INTC) is an industry leader creating world-changing technologies that drive global progress and enrich lives. Inspired by Moore’s Law, we continue to advance semiconductor design and manufacturing to help our customers meet their most important challenges. By infusing intelligence into the cloud, network, edge and computing devices of all kinds, we unlock the potential of data to make business and society better. For more information on Intel innovation, please visit the Intel China newsroom at newsroom.intel.cn and the official website intel.cn.
Intel Corporation, Intel, the Intel logo and other Intel logos are trademarks of Intel Corporation or its affiliates. Other names and brands mentioned herein are the property of their respective owners.
C114 News, Beijing time, June 23 (Ace). This Tuesday, Samsung Electronics announced three new chips supporting the 3GPP Release 16 standard for 5G RAN equipment: its third-generation millimeter-wave RFIC, its second-generation 5G modem SoC, and a digital front-end (DFE)-RFIC integrated chip. These latest chips will be used in Samsung’s next-generation 5G products, including its next-generation 5G Compact Macro, Massive MIMO radios and baseband units, all of which will be commercially available in 2022.
Samsung announced the chips at its recent “Samsung Networks: Redefined” online event. At the event, Samsung highlighted its more than 20 years of experience in in-house chip development and reiterated the significant investment behind launching multiple generations of chipsets starting with 3G, which has led to the company’s leading 5G solutions today.
Samsung said the new chipsets are designed to take its next-generation 5G product line to a new level, improving performance and energy efficiency while reducing the size of 5G equipment.
Specifically, Samsung’s chips include:
Samsung’s third-generation mmWave RFIC:
This chip continues the development of Samsung’s previous generation of RFICs. Launched in 2017, Samsung’s first-generation millimeter-wave RFIC chips are used in the company’s 5G FWA solution, which powers the world’s first 5G home broadband service in the U.S. market. Two years later, Samsung’s second-generation millimeter-wave RFIC chips were used in the company’s 5G Compact Macro product, which has since been widely deployed in the United States.
Samsung’s third-generation millimeter-wave RFIC chip supports both the 28GHz and 39GHz frequency bands and will be embedded in Samsung’s next-generation 5G Compact Macro products. The chip uses advanced technology that can reduce antenna size by nearly 50% and maximize the use of the internal space of a 5G base station. The latest RFIC also has improved power consumption, making 5G base stations more compact and lighter, and its output power and coverage have been improved, doubling the output power of Samsung’s next-generation 5G Compact Macro products.
Samsung’s second-generation 5G modem SoC:
Samsung launched its first 5G modem SoC in 2019, which is used in the company’s 5G baseband unit and 5G Compact Macro products. So far, Samsung’s 5G modem SoC products have shipped more than 200,000 units.
Compared to the previous generation, this new second-generation 5G modem SoC will give Samsung’s upcoming baseband unit twice the capacity while cutting power consumption in half. In addition, it supports both Sub-6GHz and mmWave frequency bands, providing beamforming capabilities and higher power efficiency for Samsung’s next-generation 5G Compact Macro products and Massive MIMO radio units, while reducing the size of both products.
Samsung DFE-RFIC integrated chip:
In 2019, Samsung launched its first digital/analog front-end (DAFE) chip as an important component of its 5G base station (which includes the Samsung 5G Compact Macro), supporting the 28GHz and 39GHz frequency bands.
Samsung’s new DFE-RFIC integrated chip combines RFIC and DFE functions for the Sub-6GHz and mmWave frequency bands. By integrating these functions, the chip not only doubles the frequency bandwidth but also enables Samsung’s next-generation 5G solutions, including the 5G Compact Macro, to reduce product size and improve output efficiency.
Junehee Lee, Executive Vice President and Head of R&D, Networking Business, Samsung said: “These newly launched chips are the foundational building blocks of our advanced 5G solutions, and through a long-term R&D process, Samsung has been at the forefront of delivering cutting-edge 5G technology. As one of the world’s largest semiconductor companies, we are committed to developing the most innovative chips for the next phase of 5G development and integrating product features that mobile operators seek to remain competitive.”
Anshel Sag, analyst at Moor Insights & Strategy, said: “5G chipsets are critical to enabling the performance capabilities required for next-generation network deployments. Samsung’s long-term experience with in-house chips is a key differentiator, enabling it to become a leader in delivering 5G solutions that meet operator needs.”
Zhongguancun Online News: This afternoon, vivo’s official WeChat account announced that the APEX 2020 concept phone will support 60W wireless super flash charging technology.
vivo APEX 2020 will support 60W wireless fast charging. According to earlier reports, the front of the vivo APEX 2020 will adopt a waterfall-like display, officially called a “full-view integrated screen,” which supports a 120Hz refresh rate. According to the official poster, the phone has no front camera, and it is unclear whether it uses under-display camera technology.
Rendering of vivo APEX 2020 revealed on the Internet
In other respects, the vivo APEX 2020 concept phone will adopt an “Oreo” multi-camera arrangement and be equipped with a periscope zoom lens supporting high-magnification continuous optical zoom. The phone also continues the hole-free design of the second-generation vivo APEX concept phone: there are no openings on the front, the screen is driven to produce sound by a micro-vibration unit, and the sides use dual-sensing hidden buttons that simulate real press feedback through a linear motor.
[Introduction] Digi-Key has launched a new video series, “Revolutionizing Automation,” exploring cutting-edge automation and control technology. Produced with Omron and Siemens, this four-episode series highlights how Digi-Key, which processes more than 5.3 million orders per year through an efficient supply chain, collaborates with many of the world’s leading suppliers to transform automation and control solutions, including sensors, motors and controllers, robotics, connectors, power supplies, RFID and more.
Digi-Key is releasing a four-part video series titled Revolutionizing Automation with Omron and Siemens.
Eric Wendt, Director of Automation at Digi-Key, said: “Automation and control components are not only products in Digi-Key’s broad portfolio, they are the products we use every day to ensure fast, safe and efficient order fulfillment. Automation and control is a fast-growing market and is critical to ensuring that global supply chains run smoothly through the ups and downs, so we are excited to share more about Digi-Key’s use of this technology in this video series.”
The first video in the video series, Totally Integrated Automation, is now live on the Digi-Key website. Based on interviews with Siemens leadership, this episode explores the various building blocks for Totally Integrated Automation, which are now available to Digi-Key customers.
In the second episode, Robots and Machinery, we show how Digi-Key leverages robotics from Omron and others to automate tasks throughout its warehouse. The episode is expected to be released in early April.
In late April, the third episode, Inventory Management and Sorting, visits Digi-Key’s new distribution center, which manages the industry’s largest in-stock electronics inventory, and highlights how Siemens products are helping automate the operation of this new facility.
The fourth and final video in the series, titled “Efficiency and Worker Safety,” will launch in May and will focus on the many ways in which Omron’s automation solutions are used to simplify everyday tasks to keep workers safe.
Mark Binder, Director of Channels at Omron, noted: “Automation has a long and storied history of enabling factories to increase productivity, increase flexibility and reduce workload. As innovation continues to drive manufacturers to automate more processes, finding the right partner is essential to provide integrated, intelligent and interactive solutions. As a leader in automation technology, with solutions covering the entire production process, Omron is consistently committed to helping system integrators and machine builders cope with constantly changing needs while ensuring operational excellence.”
Kurt Covine, Director of Partner Sales at Siemens Digital Industries, said: “The future of this industry is here. Automation goes hand in hand with the digitalization of production. Digitalization can give businesses powerful competitive advantages such as greater flexibility, minimal downtime and improved productivity and quality. Our resources are limited, and we need to do more with less. Partnering with Digi-Key gives our customers access to Siemens’ latest automation solutions to meet this challenge and realize the promise of digitalization.”
To watch this video series and learn how Digi-Key is revolutionizing the future of automation and control, visit the Digi-Key website.
About Omron
Omron Automation is a global leader in automation technology. We have the world’s most comprehensive product portfolio, covering sensing, control, safety, vision, motion, robotics and services. We are passionate about innovation and automation, and strive to create an environment where people and machines work in harmony. We focus on developing next-generation technologies to provide integrated solutions that optimize machines, production lines and businesses, making manufacturing safer and more efficient. With more than 30,000 employees in more than 120 countries, we provide local professional service and support wherever your needs are. With our proof-of-concept centers around the world, we give customers the confidence that our solutions will work for the future they are building, and the freedom to start now and do what they do best: building world-class products.
About Siemens
Siemens Corporation is the US subsidiary of global technology giant Siemens AG, which has distinguished itself in the industry for more than 170 years with engineering excellence, relentless pursuit of innovation, quality, reliability and broad international reach. The company operates globally, focusing on intelligent infrastructure for buildings and distributed energy systems, as well as automation and digitalization for process and manufacturing. Siemens is committed to promoting the convergence of the digital world and the physical world to achieve inclusive benefits for customers and society as a whole.
About Digi-Key Electronics
Digi-Key Electronics is a global full-service authorized distributor of electronic components, headquartered in Thief River Falls, Minnesota, USA, distributing more than 10.2 million products from more than 2,200 premium brand manufacturers. Digi-Key also offers a wide variety of online resources such as EDA and design tools, datasheets, reference designs, instructional articles and videos, and multimedia libraries, as well as 24/7 technical support via email, phone and live chat. For additional information or access to the world’s most extensive technology innovation resources, please visit www.digikey.cn and follow our WeChat, Weibo, Tencent Video and BiliBili accounts.
Zosi Automotive Research recently released the “2021 Electric Vehicle Charging Station and Charging Pile Market Research Report”.
The number of charging piles in the world is increasing rapidly, and the high-power fast charging network leads the growth
By the end of 2020, more than 11 million electric vehicles were on the road worldwide. Although the global auto industry has experienced a sharp recession under the influence of the epidemic, the number of global electric vehicle registrations in 2020 still increased by 41%. According to IEA (International Energy Agency) data, it is expected that global electric vehicle sales will reach 15-20 million in 2025.
Global electric vehicle sales forecast
Source: IEA
In this context, governments of various countries have accelerated the planning and construction of charging infrastructure. According to IEA data, the number of electric vehicle charging facilities worldwide in 2020 is 9.5 million, of which 2.5 million are public charging facilities. A conservative estimate is that by 2025, the number of electric vehicle charging facilities worldwide will increase to around 50 million, including about 10 million public charging facilities.
Global charging infrastructure development expectations
Source: IEA
At present, China has more charging piles than any other country. As of the end of 2020, the number of new energy vehicles in China had reached 4.92 million, with a total of about 1.681 million charging piles, including 874,700 private charging piles and 806,000 commercial charging piles (both public and dedicated), giving a pile-to-vehicle ratio of 0.34:1.
It is estimated that by 2025, the number of new energy vehicles in China will reach 17.82 million, with a total of about 9.391 million charging piles, including 6.183 million private charging piles and 3.208 million commercial charging piles (both public and dedicated), giving a pile-to-vehicle ratio of 0.53:1.
Forecast of charging infrastructure in China (unit: 10,000)
Source: Zosi Automotive Research
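For clarity, the pile-to-vehicle ratios quoted above are simply total charging piles divided by total vehicles; a quick check using the figures cited:

```python
# Pile-to-vehicle ratios implied by the figures above
vehicles_2020, piles_2020 = 4.92e6, 1.681e6      # end of 2020 (actual)
vehicles_2025, piles_2025 = 17.82e6, 9.391e6     # 2025 (forecast)

print(round(piles_2020 / vehicles_2020, 2))  # -> 0.34
print(round(piles_2025 / vehicles_2025, 2))  # -> 0.53
```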
At present, China’s expressway fast-charging network has basically taken shape and ranks first in the world. As of 2020, a total of 2,251 charging stations and 9,065 charging piles had been built along 42 expressways, covering a service mileage of 54,000 kilometers, or 35% of the national expressway mileage. Judging from State Grid’s bidding information for expressway charging equipment over the years, expressway charging piles are mainly rated at 80-160 kW, and deployment of 240/480 kW ultra-high-power charging piles has begun.
Quantity and power distribution of State Grid high-speed charging equipment tenders from 2014 to 2021
Source: State Grid tendering data, Zosi Automotive Research analysis
Mid-to-high-end smart electric vehicle brands vigorously deploy charging network construction
The added value of smart electric vehicle brands has increased significantly, bringing a consumption upgrade to the automotive industry. In addition to the intelligence and quality of the vehicle itself, improving the consumer’s charging experience is also crucial. In an August 2018 article, “Can the battery swap model that NIO and BAIC New Energy bet on subvert the industrial ecology?”, Zosi Automotive Research stated clearly that NIO builds a closed business scenario through the battery swap model, which greatly improves NIO’s brand value and service level and is a very clever business strategy.
Many OEMs have also realized the importance of closed (or semi-closed) charging networks. Mid-to-high-end start-up brands such as Tesla, NIO, Xpeng, and Li Auto, as well as the pure-electric high-end brands of traditional OEMs, such as Geely Zeekr, GAC Aion, BAIC ARCFOX, SAIC R Auto, and Volkswagen ID, have started or are planning to deploy supercharging stations.
Layout of self-built and self-operated charging networks by Chinese OEMs
Source: Zosi Automotive Research
Our analysis believes that at present, OEMs mainly adopt three charging network construction and operation modes:
01 Mode 1: Completely self-built and self-operated “closed supercharging” system
This model is very expensive and requires a very strong market presence to sustain; Tesla is its representative. In China, Tesla has deployed a large number of charging piles. Although it has switched to the national-standard connector, Tesla’s charging network remains almost entirely closed to outside vehicles. Although Tesla has repeatedly said it could open its Superchargers to other brands, we believe the possibility of it opening up in the short term is very small.
Tesla has built more than 800 Supercharger stations and 6,300 Supercharger piles in China, along with more than 710 destination charging stations, and its charging network covers more than 290 cities. In 2021, Tesla put into operation a Supercharger factory in Shanghai, with an initial planned annual capacity of up to 10,000 units, mainly V3 Superchargers.
With the widespread deployment of its closed charging network, Tesla has in effect built a strong consumer barrier in China. Even if Tesla faces many doubts in the short term, it will still occupy an important position in the Chinese market in the long run, and its closed charging network has become one of the key elements of its business success.
Tesla’s charging infrastructure distribution in China
Source: Tesla
02 Mode 2: Completely self-built and self-operated “closed battery swap + open supercharge” system
In addition to supercharging stations, battery swap stations are also a main part of OEMs’ charging layouts. NIO regards battery swapping as one of its core business models: it introduced the vehicle-battery separation model and took the lead in establishing Wuhan Weineng Battery Assets Co., Ltd., which is responsible for the management and operation of batteries.
In April 2021, NIO partnered with State Grid and began deploying its second-generation battery swap stations across the country. The new swap station supports one-tap battery swapping from inside the car, without the user getting out, and can perform up to 312 swaps per day, effectively improving swapping efficiency.
As of June 2021, NIO had deployed 249 swap stations, 177 supercharging stations, and 1,408 supercharging piles across the country. NIO has cooperated with State Grid to launch the second-generation swap station and plans to deploy 500 swap stations nationwide by the end of 2021. As NIO’s charging and swapping network gradually improves, we predict that NIO may appropriately extend its brand pricing range downward and further seize the RMB 250,000-350,000 market.
NIO’s power-up (charging and swapping) scenarios
Source: NIO
03 Mode 3: Cooperatively operated “open supercharge” system + part of self-built and self-operated “closed supercharge” system
Unlike the relatively closed charging networks built and operated by NIO and Tesla, Xpeng mainly cooperates with third-party operators such as TELD to build its free supercharging network, which greatly reduces network deployment and operating costs.
At the same time, Xpeng has also started building its own brand-exclusive charging stations, similar to Tesla and NIO, to further upgrade its branded charging service, and plans to have more than 500 Xpeng-branded supercharging stations by the end of 2021. We expect the price of Xpeng’s next pure-electric SUV to move further down-market.
Xpeng Motors own brand exclusive charging station
Source: Xiaopeng Motors
To date, Xpeng Motors’ charging network covers 164 cities, with 1,140 free charging stations and 19,019 free charging piles (some of which are self-built by Xpeng), and is expected to cover more than 200 major cities nationwide by the end of 2021. Since launching the “Lifetime Free Charging Plan” (capped at 3,000 kWh per year) in September 2020, Xpeng has seen strong sales growth.
Xpeng Motors monthly insurance registration data
Source: Zosi Database
In addition, Volkswagen (China), FAW, Jianghuai Automobile, and Wanbang New Energy jointly established the charging operator CAMS. Similar to Xpeng’s model, it also adopts a combined operating model of open stations plus partially closed stations (secured with ground locks).
For other OEMs, we believe “Mode 3” is likely to be adopted, though each manufacturer may have slightly different strategies and layout ideas. For example, the newly released Geely Zeekr 001 is the first 800V-platform model in China and will be paired with 360 kW ultra-fast charging piles, currently the fastest-charging in the industry, which can add 120 kilometers of range in as little as 5 minutes of charging. According to its plan, Zeekr will complete 290 charging stations and 2,800 charging piles in 2021, and by the end of 2023 the cumulative totals are planned to reach 2,200 stations and 20,000 piles. So far, Zeekr’s ultra-fast charging network has not been officially unveiled.
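As a rough sanity check on the “5 minutes for 120 km” figure, the arithmetic below assumes a typical consumption of 16 kWh/100 km (our assumption, not a figure from the article); it suggests the claim corresponds to an average charging power of roughly 230 kW over the 5-minute burst, comfortably below the 360 kW peak.

```python
peak_power_kw = 360                 # from the article
charge_minutes = 5                  # from the article
consumption_kwh_per_100km = 16      # ASSUMPTION: typical mid-size EV consumption

# Energy needed for 120 km of range, and the average power that implies
energy_for_120km = 120 / 100 * consumption_kwh_per_100km        # ~19.2 kWh
avg_power_kw = energy_for_120km / (charge_minutes / 60)         # ~230 kW

print(round(energy_for_120km, 1), round(avg_power_kw))
```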
In the future, as 800V high-voltage fast-charging architectures are promoted, both foreign and domestic brands will gradually move their product portfolios onto high-voltage platforms, and high-end smart electric brands will build their own charging networks.
Cirrus Logic’s CS470xx series is a new generation of 32-bit audio system-on-chip (ASOC) processors delivering 300,000,000 multiply-accumulate operations per second (MAC/S), targeted at high-fidelity, cost-sensitive designs. The series integrates S/PDIF Rx, S/PDIF Tx, analog inputs, analog outputs and sample rate converters (SRCs), which simplifies system design and reduces total system cost. It is mainly used in car audio, DTV, MP3 docking stations, AVRs and DVD receivers, and DSP-controlled speakers (sound bars, subwoofers). This article describes the main features of the CS470xx series, the CS47048 block diagram, and the CDB47xxxD main and daughter board block diagrams and detailed circuit diagrams.
The CS470xx family is a new generation of audio system-on-a-chip (ASOC) processors targeted at high-fidelity, cost-sensitive designs. The CS470xx simplifies system design and reduces total system cost by integrating the S/PDIF Rx, S/PDIF Tx, analog inputs, analog outputs, and sample rate converters (SRCs). For example, a hardware SRC can down-sample a 192 kHz S/PDIF stream to a lower Fs to reduce the memory and MIPS required for processing. This integration effectively reduces the chip count from 3 to 1, allowing smaller, less expensive board designs.
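The CS470xx performs sample rate conversion in silicon, but the data-rate argument is easy to see in software. The sketch below (host-side Python with SciPy, purely illustrative) downsamples a 192 kHz stream to 48 kHz and shows the 4x reduction in samples the DSP would otherwise have to process.

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 192_000, 48_000
t = np.arange(int(0.1 * fs_in)) / fs_in          # 0.1 s of signal at 192 kHz
x = np.sin(2 * np.pi * 1000 * t)                 # 1 kHz test tone

# Polyphase resampling from 192 kHz down to 48 kHz (ratio 1:4)
y = resample_poly(x, up=1, down=4)

print(len(x), "->", len(y))   # 19200 -> 4800 samples: 4x less data to process
```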
The CS470xx is programmed using the simple yet powerful Cirrus proprietary DSP Composer™ GUI development and pre-production tuning tool. Processing chains can be designed using a drag-and-drop interface to utilize functional macro audio DSP primitives and custom audio filtering blocks. The end result is a software image that is downloaded to the DSP via the serial control port.
The Cirrus Framework™ programming environment offers Assembly and C language compilers and other software development tools for porting existing code to the CS470xx family platform.
The CS470xx is available in a 100-pin LQFP package with exposed pad for better thermal characteristics. Both Commercial (0℃ to +70℃) and Automotive (–40℃ to +85℃) temperature grades are available.
CS470xx series main features:
Cost-effective, High-performance 32-bit DSP
300,000,000 MAC/S (multiply accumulates per second)
Dual MAC cycles per clock
72-bit accumulators are the highest precision in the industry
32K x 32-bit SRAM with three 2K blocks assignable to either Y data or program memory
Integrated DAC and ADC Functionality
8 Channels of DAC output: 108dB DR, –98dB THD+N
4 Channels of ADC input: 105dB DR, –98dB THD+N
Integrated 5:1 analog mux feeds one stereo ADC
Configurable Serial Audio Inputs and Outputs
Integrated 192 kHz S/PDIF Rx
Integrated 192 kHz S/PDIF Tx
Supports 32-bit serial data @ 192 kHz
Supports 32-bit audio sample I/O between DSP chips
TDM I/O modes
Supports Different Sample Rates (Fs)
Three integrated hardware SRC blocks
Output can be master or slave
Supports dual-domain Fs on S/PDIF vs. I²S inputs
DSP Tool Set with Private Keys Protect Customer IP
Integrated Clock Manager/PLL
Flexibility to operate from internal PLL, external crystal, external oscillator
Input Fs Auto Detection with μC Acknowledgement
Host Control and Boot via I²C™ or SPI™ Serial Interface
Configurable GPIOs and External Interrupt Input
1.8V Core and a 3.3V I/O that is tolerant to 5V input
Low-power Mode
CS470xx series target applications:
Automotive head units and outboard amplifiers
Automotive processors and automotive integration hubs
The integrated circuit chip and its package are an inseparable whole: no chip can work without packaging, so packaging is essential. As IC production technology has advanced, packaging technology has been continually updated, and each generation of ICs is closely linked to a new generation of IC packaging technology. In the 21st century, as semiconductor technology gradually approaches the limits of silicon process scaling, the industry has entered the “post-Moore’s Law” era, and advanced packaging technology has achieved unprecedented development.
SiP creates traditional “ASIC” at low cost
Traditionally, the industry has used ASIC technology to integrate different functions onto a single chip to advance a design. But as IC manufacturing processes continue to shrink, ASIC scaling becomes more difficult, the power/performance benefits diminish, and the cost of tape-out rises. Advanced packaging is therefore becoming another important direction of innovation for foundries and packaging houses.
In a sense, SiP is a traditional “ASIC” built at lower cost. SiP modules can integrate chips and passive devices from different processes and materials into one system to meet the functional requirements of electronic products.
Most chips still use mature and commercial packaging technology. Even so, advanced packaging technology that can provide smaller size and better electrical performance has been favored by more and more IC suppliers. In addition, the need for miniaturization, which combines shorter development time and cost-effectiveness, is driving the mass adoption of SiP. According to Yole, the SiP market is expected to grow to $1.88 billion by 2025, with a CAGR of 6% during the forecast period (2019-2025).
SiP is currently used mainly in TWS earphones, smart watches, smartphones, servers and other fields. In the future, it will appear in more wearable products such as smart glasses, as well as 5G millimeter-wave modules, smart cars, biomedicine and other applications with special size requirements.
Using SiP technology can also alleviate the “memory wall” problem that bottlenecks system performance and achieve better EMI shielding. In addition, Zhao Jian, AVP of USI’s Advanced Process R&D Center and Miniature Module Business Division, pointed out that through heterogeneous integration, SiP can also reduce the number of processes in back-end assembly plants and allow more highly automated processes to be integrated at the front end, thereby reducing overall supply chain complexity.
The ultimate goal of SiP is to achieve fully integrated self-contained autonomous electronic systems. This means that the SiP will have independent power supplies, microprocessors, inputs, outputs, and passives, and be able to perform exactly the required functions without external wiring. The ideal SiP has no external pins (if it has its own power supply) or only 2 pins – for power and ground. However, the premise is that the power distribution network (PDN), signal integrity (SI) and thermal management, as well as the reliability of the system are all handled properly.
Chiplet will drive up R&D costs for heterogeneous integration
In addition, as the number of functional modules integrated increases and the chip size becomes larger, the most direct impact will be a reduction in yield. One of the solutions is to cut the chip into multiple chiplets, and then use the high-density interconnection provided by advanced packaging technology to integrate these chiplets in the same package. Presumably, this could reduce production costs while increasing yields.
Industry insiders generally believe that Chiplet can also be regarded as a SiP technology, essentially the commercialization of the IP modules found in SoCs.
The industry view is that Chiplet-based modular design will push up the R&D cost of next-generation heterogeneous integration technology and drive the further development of packaging technology. Depending on price and performance requirements, Chiplets can offer a variety of packaging options. This trend is expected to make the front-end and back-end semiconductor processes, which used to rely on different tools and equipment, more and more similar.
The challenges are also evident. For example, Fang Lizhi, deputy general manager of Licheng, pointed out earlier that in terms of cost reduction, the biggest cost risk of advanced packaging is packaging faulty chips together with good chips, resulting in modules that do not work. This is especially evident in wafer-to-wafer (W2W) packaging, where there is no way to screen for known good die (KGD) and reject faulty die in advance.
As more and more single chips are integrated, the quality requirements for incoming chips will become more and more stringent. In addition, licensable IP makes assembling chips easier, but also more expensive.
Packaging technology innovation faces many challenges
Advanced packaging undoubtedly presents many new challenges. In addition to materials and equipment, as advanced packaging becomes more and more complex, existing EDA design tools need to solve many problems in completing advanced packaging design.
Ling Feng, founder and CEO of Xinhe Semiconductor, pointed out that between different chips, and between chips and packages, there is a “wall” in the design and analysis capabilities of EDA tools. How to break that “wall,” achieve consistent data across the EDA platform, complete the co-design of signals, power, heat, and stress, and complete simulation and analysis in a unified database will become an important driving force for the further development of SiP technology.
For the domestic industry, the situation is more severe and there are more challenges.
Santosh Kumar, chief analyst at research firm Yole Developpement, once pointed out that although the market size of advanced packaging is growing rapidly, the relationship of the supply chain will become more complex than ever. Many links in the industry chain have entered the system-in-package market, including IDM, foundry and EMS. In the future, this market will usher in more players, and substrates/PCBs and foundries may share some of the market pie of advanced packaging. Therefore, in addition to the diversified technical layout, packaging factories must also learn to respond more flexibly to the new industrial environment.
At present, a number of packaging and testing companies are at the forefront of the industry, such as Changdian Technology (JCET), Tongfu Microelectronics, and Tianshui Huatian. Among them, Changdian Technology now has SiP technology that can compete with ASE, as well as Fan-Out eWLB, WLCSP, SiP, bumping, FC-BGA and many other packaging technologies.
Industry experts believe that, in the future, packaging and testing companies with complete system-level packaging and system module integration capabilities will be more likely to be favored by the market. Huachuang Securities also pointed out that as Moore’s Law slows down, the packaging step is becoming more and more important to improving overall chip performance. As advanced packaging develops toward miniaturization and integration, technical barriers continue to rise; the packaging and testing segment may replicate the development path of the foundry segment, that is, the market size of advanced packaging will grow rapidly and manufacturers with leading technology will reap the largest dividends.
Although domestic advanced packaging technology has made progress, it still faces many challenges. Data show that in 2020, China’s advanced packaging output value accounted for only 14.8% of the global market, and there is still a large gap in overall technical level compared with world-class packaging and testing enterprises. However, against the background of domestic substitution, packaging and testing, as the sub-sector where China’s semiconductor industry has the most prominent advantages, will continue to grow. New challenges and problems at the advanced packaging level await domestic manufacturers.
After a period of unbridled growth from 2016 to 2019 and an industry shakeout in 2019, autonomous driving has entered a new stage of development. Industry-leading companies such as Google’s Waymo, Baidu Apollo, Tesla, Nvidia and Mobileye continue to iterate their technologies and accelerate scenario-based deployment. To this end, China Electronics News has launched a series of reports, “Towards a New Era of Autonomous Driving,” covering technology upgrades, vendor strategies, industrial development and more, to describe the new look of the autonomous driving industry.
AI chips are hot, and self-driving AI chips are even hotter. NVIDIA, Intel, Tesla, Qualcomm, Horizon, Black Sesame Intelligence and other established chipmakers and start-ups at home and abroad have poured into the automotive AI chip market. Today, the commercialization of L2+ ADAS is in full swing, and the route to deploying L4 high-level autonomous driving is becoming clearer. There is no doubt that autonomous driving is becoming a strategic high ground that leading chip companies are racing to seize.
Judging from the product roadmaps of major manufacturers, autonomous driving chips present a pattern in which the three architectures of GPU, FPGA and ASIC coexist. However, the underlying architecture is not the only factor that determines autonomous driving capability. As vehicles become more intelligent, the requirements on autonomous driving software grow, and an autonomous driving chip race that begins with hardware has been fully launched.
“CPU+XPU” is the mainstream trend of autonomous driving chip design
Autonomous vehicles are becoming more and more intelligent, and the volume of data that needs to be processed keeps growing. High-precision maps, sensors, lidar and other hardware and software place higher demands on computing, and main control chips with AI capabilities have become mainstream; acceleration chips increase computing power and speed up algorithm iteration. Common AI acceleration chips today include GPUs, ASICs, and FPGAs.
Wang Xianbin, a senior analyst at Gasgoo Automotive Research Institute, pointed out to a reporter from China Electronics News that ECUs are commonly used in traditional vehicles, and the underlying chips are mainly CPUs. Autonomous driving requires a lot of real-time data transmission, and the computing power and functions of CPU alone cannot meet the requirements. The combination of CPU and GPU, FPGA, ASIC and other architectures to form “CPU+XPU” is the mainstream trend of autonomous driving chip design.
At present, mainstream manufacturers mostly use a “CPU+XPU” combination in their autonomous driving chips. NVIDIA Xavier and Tesla FSD adopt a “CPU+GPU+ASIC” design route. Xavier takes the GPU as its computing core and has four main modules: CPU, GPU, a deep learning accelerator (DLA) and a programmable vision accelerator (PVA), with the GPU occupying the largest area. Tesla FSD uses an NPU (a type of ASIC) as its computing core and has three main modules: CPU, GPU and a neural processing unit (NPU); Tesla’s self-developed NPU occupies the largest area and is mainly used to run deep neural networks, while the GPU is mainly used to run the post-processing part of the deep neural network.
Mobileye EyeQ5 and the Horizon Journey series adopt a “CPU+ASIC” architecture. EyeQ5 has four main modules: CPU, computer vision processors (CVP), a deep learning accelerator (DLA) and a multithreaded accelerator (MA), of which the CVP is an ASIC designed for many traditional computer vision algorithms. Horizon independently designed and developed a dedicated ASIC for AI, the Brain Processing Unit (BPU).
Waymo adopts “CPU+FPGA”: its computing platform uses Intel Xeon CPUs with 12 or more cores, paired with Altera Arria-series FPGAs.
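To make the “CPU+XPU” division of labour concrete, the toy Python sketch below models how a frame-processing pipeline might be partitioned across a CPU, a GPU and an NPU. The stage names and device assignments are illustrative only and are not taken from any particular vendor’s chip.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Stage:
    name: str
    device: str       # "CPU", "GPU", "NPU", "FPGA", ...
    run: Callable     # the kernel executed for this stage

def build_pipeline(kernels: Dict[str, Callable]) -> List[Stage]:
    """Illustrative partitioning: control logic stays on the CPU, pixel-parallel
    work goes to the GPU, and neural-network inference goes to the NPU/ASIC."""
    return [
        Stage("sensor_fusion",    "CPU", kernels["fuse"]),
        Stage("image_preprocess", "GPU", kernels["preprocess"]),
        Stage("dnn_inference",    "NPU", kernels["detect"]),
        Stage("postprocess_plan", "CPU", kernels["plan"]),
    ]

def run_frame(pipeline: List[Stage], frame):
    data = frame
    for stage in pipeline:
        # On a real SoC this would enqueue work on the named accelerator;
        # here each stage simply runs in-process.
        data = stage.run(data)
    return data
```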
Three architectures for racing high-level autonomous driving
“GPU is good at image recognition, and ASIC and FPGA can be flexibly designed to meet customized needs.” Wang Xianbin told a reporter from China Electronics News.
Autonomous driving requires high-precision, highly reliable image recognition. The GPU was originally designed for the massively parallel computation needed in image processing, which happens to match the key technical requirements of autonomous driving. NVIDIA has accumulated technology and market share in the GPU field over a long period, and after entering the autonomous driving race it quickly took the market with its GPUs. Its partners include traditional carmakers such as Mercedes-Benz, Volvo, Hyundai, Audi and SAIC, while new carmakers such as NIO, Li Auto and Xpeng also use Nvidia’s self-driving chips.
In August this year, NVIDIA introduced its latest autonomous driving chip, DRIVE Atlan. According to reports, a single Atlan chip can deliver 1,000 TOPS of computing power and will be applied to L4 and L5 autonomous driving. Nvidia CEO Jensen Huang has publicly stated that the Atlan SoC will sample to developers in 2023 and ship in volume vehicles in 2025. Wang Xianbin pointed out that there will be more GPUs with diversified architectures in future autonomous driving chips: high-precision maps, sensors, and lidar place ever higher requirements on image recognition, and demand for GPUs will grow.
Tesla has followed a design route similar to Nvidia’s, but with a greater focus on ASICs. In August this year, Musk unveiled the self-developed Dojo cloud training system at Tesla AI Day 2021. Dojo’s training chip is an ASIC focused on artificial intelligence training, delivering BF16 computing power of 1024 GFLOPS. Tesla says its efficiency exceeds existing GPUs and TPUs and that it can greatly accelerate algorithm improvement, paving the way for L4 and L5 autonomous driving. Tesla Dojo simulates a world very close to the real one in the cloud to train autonomous driving technology.
Musk has long argued that the only way to solve autonomous driving is to solve real-world AI, in both hardware and software; unless a company has strong AI capability and enormous computing power, it will struggle to solve autonomous driving. Dojo was conceived with exactly this problem in mind, so Tesla’s choice of an ASIC is not hard to understand: Tesla has no intention of providing general-purpose capability to every industry, and the advantage of an ASIC is that its design can be tailored to better meet the customized needs of the product.
Waymo can be said to belong to the FPGA camp. In 2017, Intel announced that it had been working with Google on self-driving cars since 2009 and that it supplies Xeon processors and Arria system chips (for machine vision) to Waymo, the autonomous driving company owned by Google’s parent Alphabet. Arria is an FPGA, but Waymo is relatively low-key about its chips and has not disclosed many details. It is worth noting that in 2015 Intel acquired Altera, the major FPGA manufacturer, and in 2017 it acquired Mobileye, whose EyeQ series of autonomous driving chips is representative of the ASIC technology route.
Both Nvidia and Tesla’s new products are aimed at L4 and L5 autonomous driving. Waymo has been positioned at the high-end since its entry, and leading manufacturers have formed a trend of upgrading products around high-level autonomous driving.
Dalianda Holdings, a leading semiconductor component distributor dedicated to the Asia-Pacific market, announced that its subsidiary Youshang has launched an outdoor 400W-IP67 waterproof LED power supply solution based on ON Semiconductor’s NCP13992/NCL2801.
Figure 1 – Display board of the LED power supply solution based on ON Semiconductor products launched by Dalianda’s Youshang
In recent years, with the accelerated construction of smart cities, outdoor LED lighting products have gradually replaced traditional lighting and become mainstream in public transportation infrastructure such as roads, bridges, tunnels, and airports. As an indispensable part of LED lighting products, LED drive power supplies have also grown rapidly with the market. Compared with traditional lighting, LED lighting offers high luminous efficiency, stability and durability, dimmability, and easy control. However, because LEDs work outdoors for long periods in relatively harsh environments, the requirements for their power supplies are becoming increasingly stringent.
The LED power supply solution launched by Youshang this time uses ON Semiconductor’s LLC controller NCP13992, the synchronous rectification driver chip MPS6922, and the PFC controller NCL2801. The solution meets the IP67 waterproof rating and provides stable performance even in harsh environments.
Figure 2 – Scenario application diagram of the LED power supply solution based on ON Semiconductor products launched by Dalianda’s Youshang
The NCP13992 is the industry’s first current-mode LLC controller with built-in 600V gate drivers, simplifying layout and reducing external component count. It adopts a skip-cycle mode to improve light-load efficiency and integrates a series of protection features to improve system reliability. Used in LED power supplies and industrial power systems, it achieves high efficiency and ultra-low standby power consumption at both light load and full load.
The NCL2801 is a current-mode, critical conduction mode (CrM) power factor correction (PFC) boost controller IC for analog/pulse-width-modulation (PWM) dimmable LED drivers. The device optimizes total harmonic distortion (THD) while providing maximum system efficiency over a wide load range, with less overshoot/undershoot during startup and dynamic load changes thanks to optimized loop gain control based on line-level sensing. The NCL2801 also integrates an error amplifier, which simplifies loop design and reduces power consumption.
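To give a feel for what a CrM PFC controller such as the NCL2801 regulates, the arithmetic below applies the standard critical-conduction-mode boost relations (roughly constant on-time over the line cycle) to the 400 W figure of this solution. The efficiency, mains voltage and boost inductance are our illustrative assumptions, not values from the design.

```python
import math

p_out = 400.0        # W, headline figure of this LED power solution
eff = 0.95           # ASSUMPTION: PFC stage efficiency
v_ac = 230.0         # V rms, ASSUMPTION: nominal mains
l_boost = 250e-6     # H, ASSUMPTION: boost inductance

p_in = p_out / eff
# In CrM the on-time is held roughly constant across the line cycle:
#   t_on = 2 * L * P_in / V_rms^2
t_on = 2 * l_boost * p_in / v_ac ** 2
# Peak inductor current occurs at the crest of the mains voltage:
#   I_pk = 2 * sqrt(2) * P_in / V_rms
i_pk = 2 * math.sqrt(2) * p_in / v_ac

print(f"t_on = {t_on * 1e6:.1f} us, I_pk = {i_pk:.2f} A")   # ~4.0 us, ~5.2 A
```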
Figure 3 – Block diagram of the LED power supply solution based on ON Semiconductor products launched by Dalianda’s Youshang
Core technical advantages:
• High efficiency
• No auxiliary power supply, quick start
• X2 capacitor discharge function
• Input power factor close to 0.99
• Low-power and overload protection
• Temperature protection function
• Output control under all conditions
• Excellent load and line regulation
• Can be designed for a variety of working modes
• Lowest no-load loss
• OLED display, voice broadcast, Chinese-English switching, etc.
Program Specifications:
• Input: AC 100-277 V
• Output: 12 Vdc / 30 A, 360 W; ripple: …
• Standby power consumption less than 135 mW
• At 230 V rated input, the average efficiency across the 25%, 50%, 75% and 100% load points is 92.11%
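The output figure above can be cross-checked with a couple of lines of arithmetic; the per-load-point efficiencies below are hypothetical placeholders, included only to show how a four-point “load average efficiency” is computed.

```python
v_out, i_out = 12.0, 30.0
p_out = v_out * i_out                 # 360 W, matching the spec above

# HYPOTHETICAL per-point efficiencies at 25/50/75/100 % load, 230 V input
eff_points = [0.905, 0.925, 0.925, 0.929]
avg_eff = sum(eff_points) / len(eff_points)

print(p_out, f"{avg_eff:.2%}")        # 360.0  ~92.1 %
```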