Elon Musk says idling Tesla cars could create massive 100-million-vehicle strong computer for AI — 'bored' vehicles could offer 100 gigawatts of distributed compute

“One of the things I thought: if we got all these cars that maybe are bored… we could actually have a giant distributed inference fleet,” Musk said.

Plucking numbers from the air, the Tesla boss went on to optimistically project that this fleet could expand to, say, 100 million vehicles, with a baseline of a kilowatt of inference capability per vehicle. “That's 100 gigawatts of inference distributed, with cooling and power conversion taken care of,” Musk told the financial analysts on the earnings call. “So that seems like a pretty significant asset.”
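Musk's headline figure is simple back-of-envelope arithmetic; a minimal sketch, noting that both the fleet size and the per-vehicle kilowatt are his projections, not measured numbers:

```python
# Back-of-envelope check of Musk's projection: 100 million idle
# vehicles, each assumed to contribute ~1 kW of inference compute.
fleet_size = 100_000_000   # vehicles (Musk's projected figure)
per_vehicle_kw = 1.0       # assumed inference capability per car, in kW

# 1 GW = 1,000,000 kW
total_gw = fleet_size * per_vehicle_kw / 1_000_000
print(f"{total_gw:.0f} GW of distributed inference capacity")  # prints "100 GW ..."
```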

Meanwhile, owners will probably be more concerned about their bought-and-paid-for vehicles being used for someone else's benefit: drawing extra electricity, subjecting the onboard computers to prolonged heat stress, and so on. There would probably have to be a clear benefit for end users to incentivize them to sign up for such a compute-sharing scheme.

Mark Tyson, News Editor. Mark Tyson is a news editor at Tom's Hardware. He enjoys covering the full breadth of PC tech, from business and semiconductor design to products approaching the edge of reason.

hotaru251: The article notes the Tesla CEO pledged, “I am 100% confident that we can solve unsupervised full self-driving at a safety level much greater than a human.” So how many times, over how many years, did he say FSD was coming to gen-1 Teslas? And then say it was just bravado and nobody should have taken him at his word? As for users being concerned about their bought-and-paid-for vehicles being used for someone else's advantage: it would likely be in the terms when you buy the thing, so they don't have to pay anyone or offer benefits. You either accept it or you don't get the car. (And they're betting that people interested in a Tesla won't care and will just sign.)

bit_user: How much memory do they each have? That would seem to be the most immediate limitation, since it restricts model size. I'd be quite annoyed if my car started getting worse mileage or used extra power when plugged in, unless I both had discretion over whether and when the inferencing happened and was compensated for it. Also, just because it's distributed doesn't mean it isn't taxing the same grid as data centers. Sure, not everywhere is stressed equally, but some of those cars will be located in, and charging from, grids that are already under strain.

Rabohinf: Fortunately, many of us have evolved to the degree we'll never need or use an electric vehicle.

USAFRet: Elon, why hasn't the same 'distributed compute power' already emerged with the hundreds of millions of PCs around the world?

bit_user: USAFRet said: "Why hasn't the same 'distributed compute power' already emerged with the hundreds of millions of PCs around the world?" It's a good point, but here's where I think the question about memory size enters the picture. CPUs don't have enough inferencing horsepower, and dGPUs don't generally have enough memory for inferencing on the kinds of models I think he's talking about. So, if whatever new Tesla self-driving chip has enough memory, and we know they have oodles of compute power, then he might have at least a superficial argument.

USAFRet: (quoting bit_user above) For HW4, the current system: "The custom System on a chip (SoC) is called 'FSD Computer 2'. According to a teardown of a production HW4 unit in August 2023, the board has 16 GB of RAM and 256 GB of storage." https://en.wikipedia.org/wiki/Tesla_Autopilot_hardware
Also: "Tesla Intel Atom (MCU 2) and AMD Ryzen (MCU 3): Feature Differences and How to Tell What You Have" https://www.notateslaapp.com/news/2417/tesla-intel-atom-mcu-2-and-amd-ryzen-mcu-3-feature-differences-and-how-to-tell-what-you-have

bit_user: (quoting the HW4 specs above) Thank you! Okay, so anyone with a 16 GB-or-larger dGPU from the past couple of generations should be comparable to whatever he's talking about. Not "1 kW of inferencing horsepower", but some significant fraction of that.
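The 16 GB figure bit_user asks about puts a hard ceiling on weights-only model size. A rough sketch of that ceiling at a few common weight precisions, ignoring activation memory, KV cache, and OS overhead, all of which shrink the practical limit:

```python
# Rough upper bound on model size fitting in HW4's reported 16 GB RAM,
# counting weights only (activations, KV cache, system overhead ignored).
ram_bytes = 16 * 1024**3  # 16 GiB

for name, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    max_params_b = ram_bytes / bytes_per_param / 1e9  # billions of params
    print(f"{name}: ~{max_params_b:.0f}B parameters max")
```

So even under generous assumptions, HW4 would be limited to models in the single-digit-to-low-tens-of-billions parameter range, which frames what kind of "distributed inference fleet" is plausible.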

vanadiel007: Computing on a vehicle will never work properly unless they can solve connection speed to the vehicles. There's a reason we have data centers: all the units are connected to each other with super-fast interconnects so they can act "as one". Having to wait until a Tesla uploads its result over Starlink will simply take too long, so the "network of cars" will in reality only ever be a network of a single car.
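The interconnect objection can be made concrete with rough transfer-time arithmetic. The link speeds below are illustrative assumptions (datacenter NVLink-class aggregate bandwidth versus a satellite-class uplink), not measured figures:

```python
# Time to move 1 GB of intermediate tensors over various links.
# All bandwidth figures are rough, illustrative assumptions.
payload_gbit = 1.0 * 8  # 1 GB expressed in gigabits

links_gbps = {
    "NVLink-class (datacenter)": 7200,  # assumed ~900 GB/s aggregate
    "10 GbE (wired LAN)": 10,
    "Satellite-class uplink": 0.02,     # assumed ~20 Mbps upload
}

for name, gbps in links_gbps.items():
    print(f"{name}: {payload_gbit / gbps:.4f} s per GB")
```

The gap spans roughly five orders of magnitude, which is why tightly coupled inference stays inside the data center while loosely coupled, embarrassingly parallel workloads are the only realistic fit for a car fleet.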
