Toshiba Electronic Devices and Storage Corporation (‘Toshiba’) has developed an image recognition SoC (System on Chip) for automotive applications. The company says that the new SoC implements a deep learning accelerator that delivers ten times the processing speed and four times the power efficiency of its predecessor.
The SoC provides image recognition capabilities to help advanced driver assistance systems (ADAS), such as autonomous emergency braking, recognize road traffic signs and road situations at high speed with low power consumption. With consumers expecting more from the vehicles of the future in terms of safety and technology, such ADAS features need increasingly advanced capabilities.
In the new SoC, deep neural networks (DNNs), algorithms modelled on the neural networks of the human brain, perform recognition processing more accurately than conventional pattern recognition and machine learning, and they are expected to be highly useful in automotive applications. One drawback of DNN-based image recognition on conventional processors is that it is slow, as it calls for an enormous number of multiply-accumulate (MAC) calculations. Running DNNs on conventional high-speed processors also consumes too much power. Toshiba has tackled these issues with a DNN accelerator that implements deep learning in hardware. It has three key features.
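To put the MAC load in perspective, the short sketch below (in Python, purely illustrative and not part of Toshiba's design) counts the multiply-accumulate operations in a single convolution layer; the frame size, channel counts and kernel size are assumed values.

# Minimal sketch (not Toshiba's design): counting the multiply-accumulate (MAC)
# operations in one convolution layer of a DNN, to show why image recognition
# with DNNs is so computation-heavy on conventional processors.
def conv2d_mac_count(height, width, in_channels, out_channels, kernel):
    # Stride 1 and 'same' padding assumed for simplicity.
    return height * width * in_channels * out_channels * kernel * kernel

# Hypothetical camera frame: 1280x960 pixels, 3 -> 32 channels, 3x3 kernel.
# A single such layer already needs more than a billion MACs.
print(conv2d_mac_count(960, 1280, 3, 32, 3))  # 1,061,683,200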
• Parallel MAC units. DNN processing needs many MAC computations. Toshiba’s new SoC has four processors, each with 256 MAC units, which boosts DNN processing speed.
• Reduced DRAM access. Conventional SoCs lack local memory to keep temporary data close to the DNN execution unit, so they consume a lot of power repeatedly accessing external DRAM. Power is also used up loading the weight data needed for the MAC calculations. In the new SoC, SRAM is implemented close to the DNN execution unit, and DNN processing is divided into sub-processing blocks so that temporary data is retained in the SRAM, reducing the need for DRAM access (a software sketch of this tiling idea appears after this list). Toshiba has also added a decompression unit to the accelerator: weight data, compressed and stored in DRAM in advance, is loaded through the decompression unit, cutting the power consumed in loading weight data from DRAM.
• Reduced SRAM access. Conventional deep learning needs to access SRAM after processing each layer of the DNN, which also consumes a lot of power. The new accelerator, by contrast, uses a pipelined layer structure in its DNN execution unit, so a series of DNN calculations can be executed with a single SRAM access (see the second sketch below).
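The DRAM-traffic reduction described above rests on two general techniques: tiling the computation so intermediate data stays in on-chip SRAM, and keeping weights compressed in DRAM and expanding them on the way in. The following Python sketch illustrates both ideas in software terms; the tile size, the zlib compression and all function names are illustrative assumptions, not details of Toshiba's accelerator.

import zlib
import numpy as np

TILE_ROWS = 64  # assumed tile height, chosen so one tile's data fits in SRAM

def load_weights(compressed_blob, shape, dtype=np.int8):
    # Weights are kept compressed in (simulated) DRAM and expanded on load,
    # standing in here for the accelerator's hardware decompression unit.
    return np.frombuffer(zlib.decompress(compressed_blob), dtype=dtype).reshape(shape)

def run_layer_tiled(feature_map, weights, layer_fn):
    # Process the input in row tiles ("sub-processing blocks") so each tile's
    # intermediate data can stay in fast local memory instead of round-tripping
    # through DRAM. Tile-boundary (halo) handling is omitted for brevity.
    outputs = []
    for start in range(0, feature_map.shape[0], TILE_ROWS):
        tile = feature_map[start:start + TILE_ROWS]
        outputs.append(layer_fn(tile, weights))
    return np.concatenate(outputs, axis=0)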
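The pipelined layer structure, in turn, corresponds to what software frameworks usually call layer fusion: consecutive layers are computed back-to-back on data that is already held locally, so intermediate results are not written out and read back between layers. A rough software analogue, with hypothetical layers, is sketched below.

import numpy as np

def relu(x):
    return np.maximum(x, 0)

def fused_block(tile, w1, w2):
    # Two hypothetical layers run back-to-back on a tile already held locally.
    # The intermediate result never goes back out to SRAM/DRAM, which is the
    # software analogue of executing a series of DNN calculations per SRAM access.
    hidden = relu(tile @ w1)   # layer 1: result stays in the local buffer
    return relu(hidden @ w2)   # layer 2: consumes the intermediate directly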
According to Toshiba, the new SoC is compliant with ISO 26262, the international standard for functional safety in automotive applications. The company will continue working to improve the power efficiency and processing speed of the SoC, and is slated to commence sample shipments of ‘Visconti 5’, the next generation of Toshiba’s image-recognition processor, in September 2019.