Tesla claims it will soon release an FSD model with 10 times the parameters

Realm__X | 2025-09-11 15:13 | 154 views

As a machine learning researcher, I strongly doubt that this is the case... Either they decided to scrap their previous promise of backward compatibility for older vehicles and ship a much higher-end chipset, or they are performing some sort of quantization that lets the same hardware run a similar but larger model at lower precision (and also lower speed), or they came up with a groundbreaking FSD architecture, different from the current one, that uses a similar amount of resources. None of these seem very likely to me. Edit: I just realized that Tesla already announced in 2023 that an upgrade from HW3 to HW4 would be needed for long-term FSD support.

Comments (51)
noobgiraffe 2025-09-11 15:24

This never made sense. Even if they quantize from 16-bit to 4-bit, whatever the bottleneck was, that only gives them a 4x increase. Where is the rest coming from? Also, if quantization doesn't affect the results, why didn't they do it ages ago? It's like the first step to try when running on limited-spec hardware. In the recently released video Musk claims their performance bottleneck in HW4 is softmax and they improved it in HW5. That makes no sense. What kind of model architecture would you have to be using to bottleneck on softmax? It's always matrix multiplication that is the bottleneck. I think he is purposefully vague and it's only 10x on a single layer. Maybe the input layer and not the entire model.
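A back-of-envelope sketch of the point above. The bit widths here are assumptions for illustration; Tesla's actual weight formats are not public:

```python
# Rough arithmetic: how much headroom does quantization alone buy?
def quantization_gain(bits_before: int, bits_after: int) -> float:
    """Ratio by which weight storage (and roughly bandwidth) shrinks."""
    return bits_before / bits_after

# fp16 -> int4: weights shrink 4x, so at best ~4x more parameters
# fit in the same memory -- nowhere near 10x.
print(quantization_gain(16, 4))  # 4.0

# If the hardware already ran int8, int4 only buys another 2x.
print(quantization_gain(8, 4))   # 2.0
```

In other words, even the most aggressive plausible quantization step leaves most of a claimed 10x unexplained.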

Realm__X 2025-09-11 15:25

right. He might be playing with words.

habfranco 2025-09-11 15:26

Lol so that’s what the stock is pumping today. What a scam.

Jaguarmadillo 2025-09-11 15:27

What in the ketamine drug fuelled fever dream is this drivel? Fuck you Elon

mrbuttsavage 2025-09-11 15:28

> I think he is purposefully vague and it's only 10x on a single layer.

They publish perception changelogs that are like "10x improvement in (some niche metric)". Which is all well and good in something like Jira, but pushed to a customer changelog it's definitely meant to invoke "wow, 10x better". So basically I wouldn't be surprised.

Lopsided_Quarter_931 2025-09-11 15:29

What does that mean?

JRLDH 2025-09-11 15:29

Hahahahaha. I can’t believe that people are so easily duped.

beren12 2025-09-11 15:30

Often called lying

noobgiraffe 2025-09-11 15:31

Interesting fact hidden in his recent pump interview: it implies that FSD 14 will be released by the end of the year. It was supposed to be released in September. Hype matters more than actual delays though, so the stock is up 4%.

Engunnear 2025-09-11 15:41

It means that fElon has strayed into an area that the OP knows, thus the OP has suddenly realized that fElon is an idiot.  Not the first time this has happened, nor will it be the last.

Intelligent-Rest-231 2025-09-11 15:43

Everything with dingus is 10X bro!

kleingordon 2025-09-11 15:47

That's boomers for you

Lopsided_Quarter_931 2025-09-11 15:52

Yeah, that’s a well-known effect, but I’m more interested in what "parameter" means in this space.

DreadpirateBG 2025-09-11 15:53

Maybe they are finally going to embrace modern technology, i.e. LIDAR and radar and such, to work with their vision system. If they had done that 5 years ago they would be the leader. But I highly doubt Elon would admit to a mistake.

Engunnear 2025-09-11 16:01

It means how many data points the system is tracking for everything in its view. For a given object, the system might classify it as a vehicle, assign a position, estimate a motion vector… that’s half a dozen parameters for one object. The OP has caught on that the only way to multiply what’s being tracked by a factor of ten is to use more processor and memory (impossible without added hardware) or to decrease the quality of what’s being tracked (futile, if it’s even achievable).

ArchitectOfFate 2025-09-11 16:08

To put it briefly, a variable that the model manages internally. Values are assigned to these parameters during training, and they control how information propagates through the neural network. Classic parameters are frozen when training is complete and do not change without re-training; once in the hands of the end user, they're the coefficients that make the network behave the way it does.
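A toy illustration of the "frozen coefficients" description above (made-up numbers, not Tesla's network):

```python
import numpy as np

# A single dense layer y = W @ x + b. Its "parameters" are the
# entries of W and b: fixed after training, they fully determine
# how inputs map to outputs at inference time.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))  # 12 weight parameters
b = rng.standard_normal(4)       # 4 bias parameters

def layer(x):
    return W @ x + b             # parameters are read, never written

n_params = W.size + b.size
print(n_params)  # 16 -- "10x the parameters" means 10x entries like these
```

Real driving models have billions of such entries, which is why parameter count translates directly into memory and compute requirements.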

dtyamada 2025-09-11 16:25

They want it to sound impressive to the lay person but in reality, it's unlikely to significantly improve the performance.

Lando_Sage 2025-09-11 16:35

You know nothing you scribe. Elon has already figured it out, he's just waiting for the tech to catch up, 5D chess, never bet against Elon. /s

bobi2393 2025-09-11 16:36

"or they came up with a ground breaking architecture in FSD that is different from current one, utilizing similar amount of resource." Perhaps they're utilizing some of the lessons from DeepSeek's innovations, like DeepSeek R1's Mix-of-Experts architecture. Not sure if or how they'd apply those lessons to FSD (maybe different "experts" for freeways, dense cities, rural roads, and sparse neighborhoods), but other major AI companies certainly seem to have done that.
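A purely illustrative sketch of the mixture-of-experts idea mentioned above; the expert count and sizes are hypothetical, and nothing suggests Tesla actually uses this design:

```python
import numpy as np

# Mixture-of-Experts: total parameter count grows ~n_experts times,
# but only one (top-1 routed) expert runs per input, so compute per
# inference stays close to that of a single dense layer.
rng = np.random.default_rng(0)
n_experts, dim = 4, 8
experts = [rng.standard_normal((dim, dim)) for _ in range(n_experts)]
gate = rng.standard_normal((n_experts, dim))   # router weights

def moe_forward(x):
    scores = gate @ x                   # score each expert for this input
    chosen = int(np.argmax(scores))     # top-1 routing
    return experts[chosen] @ x          # only one expert's weights touched

total_params = n_experts * dim * dim    # stored: 4x a dense layer
active_params = dim * dim               # used per input: 1x
print(total_params, active_params)      # 256 64
```

This is the one architectural trick where "10x the parameters" and "same compute budget" can coexist, which is presumably why the comment brings it up.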

[deleted] 2025-09-11 16:42

[removed]

Unfair_Cicada 2025-09-11 16:48

Maybe buyers are aiming to pump it to $8.5T. The stock market, near term, is a voting game. All we need is a good story.

[deleted] 2025-09-11 17:16

Quantization does decrease precision by design. The idea is that the truncated weights aren’t impactful on the output. Re softmax, it wouldn’t surprise me if Tesla’s in-house chip design has an atypical bottleneck. I don’t know their NPU architecture, but it’s clear that their model architecture is diverging from the original design of their processor.

Realm__X 2025-09-11 17:21

This reply is more accurate. u/Engunnear (while also being very helpful) was a bit too critical of quantization -- it is a generally well-accepted technique for improving model performance on the same hardware. Even though per-operation precision certainly decreases, the rule of thumb is that a larger-parameter quantized model running in real time on the same resources can outperform a smaller-parameter unquantized one, at the cost of slower computation (or a lower response frequency).

SisterOfBattIe 2025-09-11 17:27

Does musk know how many parameters are needed to see a pedestrian when the sun is blinding the cameras?

Realm__X 2025-09-11 17:31

No amount of parameters can compensate for that, but only if every camera that could capture the pedestrian is blinded/disabled/inhibited in some manner. Given Tesla's lack of redundancy in sensor coverage, though, this is much more likely to happen than in many other vehicles offering similar driver-assistance capabilities.

Engunnear 2025-09-11 17:41

Layman’s terms and nuance don’t always peacefully coexist.

wowcoolr 2025-09-11 18:12

yes hw4 has a big untapped upside that is being revealed

wowcoolr 2025-09-11 18:14

Duped? Have you driven an HW4 Tesla? Did you see Megapack? So it's late, so what? It's not fake.

JRLDH 2025-09-11 18:22

The sooner you realize that Elon views you as an absolute fool, the better for you. He’s been lying about these fantastical magnitude improvements for years and I am truly fascinated that people like you exist.

vampyr01 2025-09-11 19:03

Corporate puffery* for the rich in the US.

k-mcm 2025-09-11 19:13

There's no way to know. I doubt Elon has the attention span to learn what it really means from his AI team. It's likely that this change was made possible by lowering the precision elsewhere. Even if not, specification boosts don't scale linearly. What AI really needs is improvements in architecture and training technology. Take a look at the Tesla robots and decide if you think they have that.

vilette 2025-09-11 19:14

Musk is always rounding up to the next power of 10, could be 2 times

ionizing_chicanery 2025-09-11 19:24

HW4 only has twice as much memory as HW3. So I have a hard time seeing how they increase the parameter count by 10x over a model that was already too big for HW3. The hardware was already only int8 so I doubt they have that much room for quantization.
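Rough arithmetic behind this objection. The "2x memory" and "already int8" figures are this commenter's claims, not confirmed specs:

```python
# If a model already saturates HW3 memory at int8, what would holding
# 10x the parameters on HW4 require?
hw4_memory_vs_hw3 = 2.0    # claimed: HW4 has ~2x HW3's RAM
param_growth = 10.0        # claimed: "10x the parameters"
quant_gain = 8 / 4         # int8 -> int4 halves bytes per weight

memory_needed_vs_hw3 = param_growth / quant_gain  # 5x HW3's memory
print(memory_needed_vs_hw3 > hw4_memory_vs_hw3)   # True: doesn't fit
```

Under these assumptions, even maximal further quantization leaves the model needing 5x HW3's memory against a claimed 2x budget.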

ionizing_chicanery 2025-09-11 19:27

Calling the activation function a bottleneck is extremely dubious...

Chris_0288 2025-09-11 19:51

Elon clearly likes to use buzzwords to sound intelligent, but anyone truly expert in the field would smell bullshit. The general public, however, would be impressed by hearing "10x parameters". When pushed on a subject, just like in that leaked Twitter chat audio, he flips out and cries.

MUCHO2000 2025-09-11 21:08

Bro. They solved middle out FSD. Don't be mad.

CareBearOvershare 2025-09-11 21:54

> either they decided to scrap previous promise regarding backward compatibility of previous vehicles

It's this one.

[deleted] 2025-09-11 22:41

Q: are Tesla's cars autonomous? A: are drivers in the Vegas Loop? It's the same picture.

pacific_beach 2025-09-11 22:57

Never gonna happen because the scam is working as-is.

egowritingcheques 2025-09-11 23:14

Why do this in 2025? Full self driving was solved in 2018.

RosieDear 2025-09-11 23:39

Boomers like myself have known 100% that he has been lying badly for at least 5 years. Anyone familiar with how business works knows he was lying. When he said full self-driving within 3 quarters (2020?)... it seems to this boomer there are only these possibilities. 1. He completely pulled it out of his butt. Not even a speck of reality. 2. His software/hardware team responsible for this lied to him. Elon is dumb, but it's hard to imagine he is THAT dumb that he wouldn't have SOME idea of where they were in the process. I try to think of a 3rd possibility, like "everyone else knew it was impossible but Elon himself might actually have believed it"... but it's hard to grasp that. I have to say it's #1. He did it for money (stock, etc.).

xjay2kayx 2025-09-12 01:05

Elon also recently bragged about Grok being 3x better than their competitors because they pushed 3x more public releases of their mobile app than their competitors. The mobile app is just a frontend shell that connects to their backend.

nlaak 2025-09-12 02:32

> duped?

Yes. Elon and Tesla have repeatedly lied about most relevant issues for years.

> have you driven a hw4 tesla?

Why would I want to?

mishap1 2025-09-12 04:30

2016 based on that video they posted.

Murky-Service-1013 2025-09-12 20:52

Investor note: 10x the GAY SEX parameter (batteries not included)

davidwitteveen 2025-09-12 23:05

Tesla's product isn't cars. Tesla's product is stock prices. If their stock price ever adjusted to reflect their actual performance as a manufacturer, a lot of people would lose a lot of money. So Elon's real job is to blow hot air into the bubble.

hardsoft 2025-09-12 23:58

Did they actually say 10 times, and not "an order of magnitude more"? Find it hard to believe

habfranco 2025-09-13 06:48

100% agree. Every decision they make is connected to this. Like for example initially putting the safety driver in the passenger seat (to then putting it quietly in the driver seat later). I’m sure they have a PR calendar connected to stock price supports/resistances. Or also when execs plan to exercise their options.

Apartment-Unusual 2025-09-13 10:32

Buy the rumour, sell the news. They just keep moving the goalpost further… to be in an eternal rumour scenario.

XKeyscore666 2025-09-13 14:50

As we saw with GPT5, more parameters doesn’t automatically mean better.

azguy153 2025-09-14 01:48

The best driving systems available today are SAE level 2. Full Self Driving is Level 5. They have a long way to go.
