Tesla presents supercomputer for vision-only approach to autonomous driving

The machine would rank among the world's top ten supercomputers and supports the further development of "Autopilot". Its new way of working also demands that computing power.

With 1.8 exaflops of compute and ten petabytes of NVMe storage, the new computer is said to process 1.8 terabytes of data per second. Tesla's head of AI, Andrej Karpathy, said at its unveiling on Monday that its total FLOPS would place it at roughly fifth position among the Top500 supercomputers. Nvidia's Selene cluster works with a comparable architecture, but with 4,480 instead of 5,760 graphics processors; it recently displaced its Microsoft counterpart. An official placement is only possible once Tesla's new monster computer has passed the dedicated benchmarks required for inclusion in the Top500 list. At the Conference on Computer Vision and Pattern Recognition (CVPR), Karpathy explained what all that computing power is needed for and what vision lies behind it: the buzzword is autonomous driving with a vision-only approach.
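
A quick back-of-envelope check of the quoted figures: dividing 1.8 exaflops across the 5,760 graphics processors gives roughly 312 teraflops per chip, which corresponds to the half-precision peak of Nvidia's current data-center GPUs and suggests the 1.8 EFLOPS is a reduced-precision rating rather than the double-precision figure used for official Top500 entries. A minimal sketch of that arithmetic (the per-GPU split is our own calculation, not a number from the talk):

    # Back-of-envelope check of the cluster figures quoted in the talk;
    # the per-GPU split is our own arithmetic, not an official number.
    TOTAL_FLOPS = 1.8e18   # 1.8 exaflops of compute
    NUM_GPUS = 5_760       # graphics processors in the cluster

    flops_per_gpu = TOTAL_FLOPS / NUM_GPUS
    print(f"{flops_per_gpu / 1e12:.0f} TFLOPS per GPU")  # prints ~312 TFLOPS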

Vision-only requires very high computing power

The new supercomputer enables Tesla to replace radar and lidar sensors with high-end optical cameras, said the 34-year-old. A computer that is supposed to react like a human needs an immense data set, and a very powerful machine is required to train the neural networks of the autonomous driving systems on this mass of information. Tesla's vehicles collect video from eight cameras each at 36 frames per second. The system then labels the enormous amount of environmental data, albeit with human help. So far, Tesla has collected one million videos of roughly ten seconds each and annotated six billion objects with depth, velocity, and acceleration. This requires 1.5 petabytes of storage space, which corresponds to around 1.5 million gigabytes.
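
Those figures can be sanity-checked with simple division: spreading the 1.5 petabytes evenly over the one million clips gives about 1.5 gigabytes per clip, or roughly 520 kilobytes per frame if every clip stores all eight camera streams. This even split is our own assumption; the figures above do not break the storage down this way.

    # Rough per-clip arithmetic from the article's figures; the even split
    # across clips and cameras is an assumption, not a published Tesla spec.
    dataset_bytes = 1.5e15            # 1.5 petabytes of labeled data
    num_clips = 1_000_000             # ten-second videos collected so far
    cameras, fps, seconds = 8, 36, 10

    bytes_per_clip = dataset_bytes / num_clips           # 1.5 GB per clip
    frames_per_clip = cameras * fps * seconds            # 2,880 frames
    bytes_per_frame = bytes_per_clip / frames_per_clip   # ~520 KB per frame

    print(f"{bytes_per_clip / 1e9:.1f} GB per clip, "
          f"{bytes_per_frame / 1e3:.0f} KB per frame")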

Advantages and disadvantages of the vision-only approach

Elon Musk has long emphasized the advantages of the method: cameras work faster than radar and lidar. That is why the new versions of the Model Y and Model 3 built in the USA no longer carry radar. In addition, the technology does not require high-resolution maps, so in theory it works anywhere in the world. The pure vision approach is more scalable, says Karpathy, but it also poses the greater challenge: the systems must process huge amounts of camera data fast enough to match the human ability to judge depth and speed. In sparsely populated areas they already work very well, he said, while in "very adverse areas like San Francisco" they definitely have more trouble.

AI detects accelerator and brake pedal mix-ups

Karpathy showed several deployment scenarios. These included emergency braking to avoid a collision with a pedestrian and a warning when the driver has not yet slowed down for a yellow traffic light. A new function called "Pedal Misapplication Mitigation", currently in the test phase, detects when the accelerator and brake pedals have been mixed up and intervenes. In the scenarios shown, the supercomputer works with the vision-only approach. According to rumors, Tesla could also feed the new approach with data from lidar systems; corresponding test vehicles have recently been spotted.
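
Tesla has not published how Pedal Misapplication Mitigation works internally. Purely as an illustration of the kind of check involved, the sketch below flags hard throttle input while the vision system reports a fast-approaching obstacle; every signal name and threshold is a hypothetical assumption, not Tesla's implementation.

    # Illustrative-only sketch of pedal-misapplication logic; all signal
    # names and thresholds are invented for this example.
    from dataclasses import dataclass

    @dataclass
    class FrameState:
        accel_pedal: float          # accelerator position, 0.0 to 1.0
        obstacle_distance_m: float  # vision-estimated distance to nearest obstacle
        closing_speed_ms: float     # vision-estimated closing speed in m/s

    def pedal_misapplication(state: FrameState) -> bool:
        """Flag a likely accelerator/brake mix-up: full throttle while the
        cameras see an obstacle the car is closing in on fast."""
        if state.closing_speed_ms <= 0:
            return False  # not approaching anything
        time_to_collision = state.obstacle_distance_m / state.closing_speed_ms
        return state.accel_pedal > 0.9 and time_to_collision < 1.5

    # A real system would cut throttle and brake; this example just reports.
    if pedal_misapplication(FrameState(accel_pedal=1.0,
                                       obstacle_distance_m=6.0,
                                       closing_speed_ms=5.0)):
        print("Pedal misapplication suspected: cutting throttle")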
