On the company’s earnings call, Tesla CEO Elon Musk revealed that the company is building its own computational hardware, which it will use in the future instead of sourcing computational units from semiconductor suppliers. According to Musk, the new hardware will deliver “an order of magnitude improvement in the frames per second,” and he called it the world’s most advanced computer designed specifically for autonomous operation. While the current NVIDIA hardware can process 200 frames per second, Tesla’s hardware can process over 2,000 frames per second, with full redundancy and fail-over.
Tesla started the development process by surveying all of the available solutions for running neural networks, including GPUs, and then weighing them against what Tesla wanted for the future. The outcome was a design that is significantly more efficient and performs at a much higher level than what is currently available.
One way Tesla managed to make the chipset more performance-oriented was by designing it for a specific set of computational needs. At one extreme, such an approach involves designing a chipset for one particular neural network architecture, which would mean limited flexibility.
This is probably the approach Tesla took to make the hardware more efficient and performance-oriented: a chip designed for a very specific type of machine learning architecture.
Though this would give Tesla a huge advantage now, it also carries a higher risk of being unable to leverage new deep learning breakthroughs discovered in the coming years.
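To make the trade-off concrete, here is a minimal Python sketch of the idea. It is purely illustrative: the class names and operation list are hypothetical and do not describe Tesla’s or NVIDIA’s actual hardware interfaces, only the general contrast between a chip hard-wired for a fixed set of operations and a programmable device.

```python
# Illustrative sketch of the specialisation trade-off discussed above.
# All class names and operations are hypothetical, not real hardware APIs.


class FixedFunctionAccelerator:
    """Models a chip hard-wired for a fixed set of neural-network operations."""

    SUPPORTED_OPS = {"conv2d", "relu", "max_pool", "fully_connected"}

    def run(self, op_name: str) -> str:
        if op_name not in self.SUPPORTED_OPS:
            # A layer type invented after tape-out cannot run without a new chip.
            raise NotImplementedError(f"{op_name} is not supported in silicon")
        return f"{op_name} executed on dedicated hardware (fast)"


class GeneralPurposeGPU:
    """Models a programmable device that can run any operation, less efficiently."""

    def run(self, op_name: str) -> str:
        return f"{op_name} executed on programmable cores (slower, but flexible)"


if __name__ == "__main__":
    accelerator = FixedFunctionAccelerator()
    gpu = GeneralPurposeGPU()

    # Operations the chip was designed for take the fast, fixed path.
    print(accelerator.run("conv2d"))

    # A newer kind of operation fails on the specialised chip and has to
    # fall back to the flexible (but slower) programmable device.
    new_op = "self_attention"
    try:
        print(accelerator.run(new_op))
    except NotImplementedError as err:
        print(f"Accelerator: {err}")
        print(gpu.run(new_op))
```

The sketch shows why specialisation pays off today (the supported operations run on dedicated, highly efficient hardware) while also showing the risk highlighted above: anything outside the fixed operation set either fails or falls back to slower general-purpose hardware.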