Tesla is dropping Nvidia from its self-driving tech, and yes, it's a huge deal. #blogpost

Okay, it's 2018, and by now we've seen a huge number of Tesla crash videos. Whether it's the driver's fault for not paying attention or the system failing, these stories always make big news. You see articles published before the driver has even made it to the hospital. But have you ever wondered what kind of tech is behind self-driving cars in general?

What do self-driving cars use to process their data?

To understand self-driving cars, you first need to understand graphics cards. And to understand graphics cards, you need to understand computers.

A basic, everyday computer has five main parts. It has a motherboard, which connects all the important parts; a central processing unit (CPU); working memory (technically called "Random Access Memory", or RAM for short); a hard drive where you keep your data; and a graphics unit that displays all of that data on your monitor. In a basic machine, the graphics unit is built into the CPU. That spec is cheap, but poor for gaming, video editing or any other content-producing work. Those computers need an additional part: a graphics card.

Graphics cards

You can think of graphics cards as mini computers inside your computer. They have their own processor - the graphics processing unit (GPU) - their own working memory (Video Random Access Memory, or VRAM for short) and their own board.

So what's the difference? Well, CPUs are more power-efficient and, for daily tasks such as reading the news or writing a document, simply put - better. CPUs usually have from 2 to 32 cores (at the extreme end); most "enthusiast" CPUs don't even have more than 8. GPUs, on the other hand, like the GTX 1050 Ti - a budget card available to pretty much everybody - have 768 CUDA cores. (CUDA cores aren't really comparable with normal CPU cores, but to keep it simple I'll use this comparison.)

Great, right? Well, no. GPUs run at lower clocks. The world's most powerful gaming graphics card has a GPU clocked at only about 1.6 GHz, while CPUs go as high as 5.0 GHz. So while a CPU can do far fewer things at once, it does each of those things faster. You don't actively use 768 Google Chrome tabs, do you?

That's why graphics cards, with their lower clocks but far more parallel operations, are better for rendering graphics, doing complex mathematical calculations and, you guessed it, self-driving AI.
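To make that concrete, here's a minimal sketch of my own (not anything Tesla or Nvidia actually ship) that times the same big matrix multiplication - the bread-and-butter operation behind graphics and neural networks - first on the CPU and then on an Nvidia GPU. It assumes you have Python with PyTorch installed and a CUDA-capable card; the exact numbers will vary with your hardware, but the many slow cores of the GPU usually crush the few fast cores of the CPU on this kind of work.

```python
import time
import torch

# One big matrix multiplication - the kind of massively parallel maths
# behind rendering, deep learning and self-driving perception.
size = 4096
a = torch.randn(size, size)
b = torch.randn(size, size)

# A handful of fast CPU cores grind through it.
start = time.time()
torch.matmul(a, b)
print(f"CPU time: {time.time() - start:.3f} s")

# Hundreds (or thousands) of slower CUDA cores attack it all at once.
if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()   # finish copying to the GPU before timing
    start = time.time()
    torch.matmul(a_gpu, b_gpu)
    torch.cuda.synchronize()   # wait for the GPU to actually finish
    print(f"GPU time: {time.time() - start:.3f} s")
```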

How big of a deal is it?

To explain how big of a deal it is, I'll list some prices. The Nvidia GTX 1080 Ti has a retail price of $700. The Nvidia Titan V has a retail price of $3,000, but wait, there's more! The Nvidia Quadro GV100 has a retail price of $7,000! And not a single one of those cards crosses the 1.6 GHz mark. To put it into perspective, Rimac said that they use a combination of PUs (processing units, so a mix of GPUs and CPUs) in the Rimac C_Two that totals over 19 GHz!

Imagine how much money Tesla would save building their own GPUs. Not only that - Nvidia is the only company that makes GPUs powerful enough for self-driving AI, which also means Tesla has to share that tech with Rimac, Porsche, Audi…

Sure, Intel and Samsung have started developing their own GPUs and graphics cards, but those won't be out and ready for a long time, and you can't easily port your work from Nvidia's architecture to another one.
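To give a rough idea of why porting is so painful, here's a tiny sketch of my own (nothing to do with Tesla's actual code) of a GPU kernel written in Python with Numba's CUDA support. Code like this is written directly against Nvidia's CUDA model - the thread-and-block layout, the kernel launch syntax - and simply won't run on an AMD, Intel or custom in-house chip without being rewritten for that architecture.

```python
import numpy as np
from numba import cuda

@cuda.jit
def add_arrays(a, b, out):
    # Each CUDA thread computes one element - this thread/grid model is Nvidia-specific.
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros(n, dtype=np.float32)

# The [blocks, threads] launch syntax only targets CUDA hardware.
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_arrays[blocks, threads_per_block](a, b, out)
```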

Could future gaming PCs use Tesla graphics cards? Probably not, but it’s an interesting thing to think about.


Comments

Freddie Skeates

Great post

08/03/2018 - 10:44 |
0 | 0

Thanks

08/03/2018 - 11:31 |
0 | 0
TheMindGarage

Interesting and impartial read! But my concern is that Tesla is biting off more than it can chew again. Making your own GPU is a difficult ask while you’re trying to make a lot of other stuff. Depends if they have the expertise within their company. I suppose they could ask another company to custom-build GPUs for them (as Koenigsegg did for tyres for example).

It’ll be interesting to see if quantum computing becomes viable - this could be a big turning point in doing “computationally difficult” things which self-driving requires.

08/03/2018 - 10:46 |
9 | 1
Jakob

When you are doing machine learning, you are always using the GPU for calculations. We have some deep learning PCs at the university; these basically have shit-tier CPUs but tons of RAM and great GPUs. All of them are Nvidia GPUs, for a good reason - deep learning frameworks are far better optimized for Nvidia than for AMD. Story time: when the whole machine learning thing became popular some years ago, Nvidia invested a lot of money into it, and it all paid off in the end. AMD only jumped on the bandwagon eventually and now has to make up ten years of technological disadvantage in that area. As such, Nvidia GPUs are by far the most widely used for machine learning.

I can see the appeal for Tesla to design their own chips and tie their software very closely to that hardware. But dropping Nvidia GPUs, which are the de facto industry standard, is a bold move. I don't think they will save any money in the process; quite the opposite. But low-level hardware programming is extremely efficient. Obviously these chips will only be efficient for the exact application they are designed for. We will see how it pays off for Tesla. I think they should have bigger concerns than that right now, to be honest.

08/03/2018 - 11:06 |
3 | 0
TheMindGarage

In reply to Jakob

They might save money in the long run, but in the short-term, it’s a big development cost. Which worries me because that’s where they’re struggling.

08/03/2018 - 11:23 |
1 | 0
Tomislav Celić

In reply to Jakob

Yeah, I agree. If Tesla's future were nothing to worry about and they had the time for this, it would be great. But the question is whether Tesla can live long enough to make this thing worth anything.

08/03/2018 - 11:33 |
1 | 0
Lootwig | Galant Lover

Wow. I think this was not the smartest move Tesla could have made.

Pls send me your GPU

08/03/2018 - 13:30 |
0 | 0
Jefferson Tan(日産)

Hmmmm…..

08/04/2018 - 05:47 |
0 | 0

Why do I picture you having 768 chrome tabs open?

08/04/2018 - 05:58 |
0 | 0