
Interesting news regarding FSD(Supervised)

EverythingMustGo95 | 2026-03-11 18:19 | 107 views

FSD Safety Metrics Are Falling Apart

Here's where the bear case gets technical and alarming. Analyst Gordon Johnson of GLJ Research flagged that Tesla's Full Self-Driving (FSD) safety metrics are "sharply deteriorating." The specific number that should concern investors: the "city miles to critical disengagement" metric for FSD v14.2 dropped to 809 miles from a peak of 4,109 miles with v14.1. For context, Waymo achieves 30,000 miles before removing safety drivers -- roughly 37 times better than Tesla's current FSD performance. A new federal NHTSA probe into Tesla's FSD system is also underway, adding regulatory risk to an already complicated autonomous vehicle story.
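For anyone who wants to check the arithmetic behind those claims, the ratios work out as follows (a quick sketch; the underlying numbers are the article's figures, not independently verified):

```python
# Sanity-checking the figures quoted above; numbers come from the post.
V14_1 = 4109    # city miles per critical disengagement, FSD v14.1 (peak)
V14_2 = 809     # same metric, FSD v14.2
WAYMO = 30000   # cited miles before Waymo removed safety drivers

drop = 1 - V14_2 / V14_1   # regression from v14.1 to v14.2
gap = WAYMO / V14_2        # how far v14.2 sits behind Waymo's benchmark

print(f"v14.1 -> v14.2 drop: {drop:.0%}")   # about an 80% regression
print(f"Waymo / v14.2 gap: {gap:.0f}x")     # about 37x
```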

Comments (95)
BranchLatter4294 2026-03-11 18:21

Where's the link to the source?

InvisibleBlueRobot 2026-03-11 18:23

this references it. [https://finance.yahoo.com/news/tesla-crashes-18-why-wall-173940033.html](https://finance.yahoo.com/news/tesla-crashes-18-why-wall-173940033.html)

* Tesla (TSLA) reported full-year 2025 net income of $3.794B, down 46.79% year-over-year, while vehicle deliveries fell 16% in Q4 2025 and 9% for the full year despite global EV market growth.
* Tesla’s Full Self-Driving safety metrics sharply deteriorated, with city miles to critical disengagement dropping to 809 miles in v14.2 from 4,109 miles in v14.1, compared to Waymo’s 30,000-mile standard.
* Tesla (TSLA) reported full-year 2025 net income of $3.794B, down 46.79% year-over-year, while vehicle deliveries fell 16% in Q4 2025 and 9% for the full year despite global EV market growth.
* Tesla’s Full Self-Driving safety metrics sharply deteriorated, with city miles to critical disengagement dropping to 809 miles in v14.2 from 4,109 miles in v14.1, compared to Waymo’s 30,000-mile standard, while an NHTSA probe into FSD adds regulatory risk.

To be fair, this analyst is heavily down on Tesla. He seems to hate Tesla, which doesn't make him wrong. His price target for Tesla stock is about $25 or $30 a share. He is down 50% since making his sell recommendation. [Analyst research site link here](https://glj-research.com/sector/tesla-inc-tsla/)

BigMax 2026-03-11 18:37

\> deliveries fell 16% in Q4 2025 and 9% for the full year despite global EV market growth

It's wild to me that Tesla stock is doing well for so many reasons, but that one is the biggest. They are shrinking quickly, in a market that is GROWING quickly. It would be like a pickleball paddle company having decreasing revenue over the last 5 years. Even FLAT numbers in the EV market should be considered a failure, much less big drops.

EverythingMustGo95 2026-03-11 18:40

Thank you. As you said, he can be down on Tesla and still have the correct info.

itsJonathanRN 2026-03-11 18:42

Where are these FSD metrics coming from?

Queasy-Bed545 2026-03-11 18:58

Maybe not quite alarming at this juncture, but it does bring into question the stability and predictability of the neural network evolution. Everyone, myself included, is used to the premise that software-driven products, minus a few inconvenient bugs to fix here and there, simply get better over time. Here the data suggests a potential corrosion of its core functionality: safety. I haven't been following FSD very long but even I noticed what seem like strange regressions given how bulletproof the product seems at times. It definitely needs to have Tesla on full alert though. Doesn't matter how incredible your product is if you can't say it won't randomly try to kill you.

patsj5 2026-03-11 19:16

Lol, thought it was 4 bullet points and realized it's 2, duplicates

ErmaGherd12 2026-03-11 19:39

it’s elon’s political polarization driving a lot of it — i see this as a temporary blip; there’s an information asymmetry in the market right now wrt “up to date info” on EVs in general, ubiquitous charging infra across US, and, the biggest one, autonomy… most US drivers are not exposed to any of this and once that info gap closes, Tesla vehicle values themselves will experience the same “short squeeze” type action experienced by the stock previously… it’s just going to take time there’s too much value to be had from these vehicles and Tesla ecosystem… people will justify / rationalize the bias in their minds

Fishbulb2 2026-03-11 19:42

No one cares. Stonk to the mooooon!!! /s

EverythingMustGo95 2026-03-11 19:51

I like this reply. I’ve worked in software development for decades and I can bring a little perspective here… Yes, software is expected to get monotonically better with each release. This is because with every bugfix I was expected to update the QA test suite to verify the improvement. If a test that used to pass fails because of my change, that's alarming. Sometimes it was a test that shouldn’t have passed before -- then I must update that test too. But this reflects badly on Tesla: did they ignore new testing failures? Or were there edge cases they hadn’t tested that got worse? It happens, but as you said, safety is a BIG issue. Was it Wally Schirra who said his rocket ship was built by the lowest bidder? His point was that safety concerns don’t always get top priority.
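The regression-guard discipline described here can be sketched in a few lines (a hypothetical harness; the scenario names and the stand-in check are invented for illustration):

```python
# Hypothetical regression harness: every fixed scenario becomes a permanent
# test case, so a release that re-breaks it fails loudly before shipping.

def plans_safe_stop(scenario: str) -> bool:
    """Stand-in for the system under test. A real harness would replay
    logged sensor data through the driving stack and check the plan."""
    handled = {"railroad_barrier", "crossed_caution_tape", "school_zone"}
    return scenario in handled

# The suite only grows: a scenario that once passed must keep passing.
REGRESSION_SUITE = ["railroad_barrier", "crossed_caution_tape", "school_zone"]

failures = [s for s in REGRESSION_SUITE if not plans_safe_stop(s)]
print("regressions:", failures)  # an empty list means no known regressions
```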

EverythingMustGo95 2026-03-11 19:53

Stink to the moon? Haven’t you heard, Elon is taking his stink to Mars!

Pangolin_farmer 2026-03-11 19:53

“Autonomy” that requires constant supervision from the driver is a gimmick. 90% of TSLA’s market cap is gimmick and unfulfilled promises right now.

hpass 2026-03-11 19:53

Irrelevant, Elon said march of the 9s has begun.

Fishbulb2 2026-03-11 20:04

Dang autocorrect. I actually should have kept it.

Queasy-Bed545 2026-03-11 20:29

FSD is more than a gimmick.  But imo you’re right that it lacks a sound monetization scheme to support its valuation, even if they did “solve” autonomy.  Taxi service is riddled with operating costs other than drivers and it would still have to compete with public transit and car ownership.  Subscription service is a niche upon niche market being limited to affluent Tesla owners with modern, capable hardware suite.

MikeDFootball 2026-03-11 20:36

the day that tesla bothers to apply for a permit in CA is the day i will be interested to see what they cooked up. until then, it's more empty words.

Queasy-Bed545 2026-03-11 20:36

I’m not a software person but the company ethos seems to be product rollout is testing.  Not to say that they completely ignore regression or QA testing but they seem to prioritize getting things out and finding out what they find out later.

ionizing_chicanery 2026-03-11 20:44

teslafsdtracker.com. It's an unofficial crowdsourced data platform. Tesla could release much higher quality data... but they don't. Almost certainly because the data doesn't look good for them. This is one reason why I'm super skeptical that FSD unsupervised is coming any time soon, if ever.

Ok_Cake1283 2026-03-11 20:50

Imagine reading Gordon Johnson posts and thinking it's representative of facts.

HurrDurrImaPilot 2026-03-11 21:01

XAi, xitter, and spacex investors will all ultimately bail out Tesla in a maneuver that gives Elon more control.

Secret-Revolution172 2026-03-11 21:06

But yet the stock is up. Ass backwards

THATS_LEGIT_BRO 2026-03-11 21:15

I disengage all the time. Rarely because of safety reasons. Sometimes it’s weather and I’d rather drive myself. Sometimes it’s a road with known potholes. Sometimes I just want to pass someone.

za72 2026-03-11 22:02

it did one thing well; then they applied that model to the real world and it's slowly collapsing under its own weight -- code applied to scenarios the original code was never meant to address

dw-c137 2026-03-11 22:04

I am curious from that data set how they are defining a critical disengagement. Since 14+ my disengagements have spiked, but it's virtually all to report an incorrect speed limit. My safety related disengagements are almost definitely down. I'm at 12k+ miles on 14.2+ and almost every disengagement is to report an incorrect speed limit, does that data set differentiate that? If not my critical disengagements are absolutely multiple times higher than any other release, but it's had far fewer "near misses" per 10k miles. Having a complete and accessible data set would be nice, this is a time an official data set might help Tesla.
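The ambiguity raised here is easy to demonstrate: the headline number changes wildly depending on which disengagements count as "critical." A toy sketch with an invented record format (the reason labels and mileages are made up for illustration):

```python
# Toy illustration (invented data): the same drive log yields very
# different "miles per critical disengagement" figures depending on
# whether speed-limit reports count as critical.
log = [
    {"miles_since_last": 310, "reason": "speed_limit_report"},
    {"miles_since_last": 95,  "reason": "speed_limit_report"},
    {"miles_since_last": 840, "reason": "safety"},
    {"miles_since_last": 455, "reason": "driver_preference"},
]

total_miles = sum(e["miles_since_last"] for e in log)
per_any = total_miles / len(log)                          # count everything
critical = [e for e in log if e["reason"] == "safety"]
per_critical = total_miles / len(critical)                # safety-only

print(per_any, per_critical)  # 4x difference from the definition alone
```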

G-T-L-3 2026-03-11 22:15

Tesla's FSD is now a victim of GIGO (Garbage In, Garbage Out). How many millions of miles do you need to train a car to drive on the road? I mean, after a couple of million miles it should have learned by now. Any additional million miles will not teach it anything new. In fact, they are now trying to catch edge cases, but these edge cases also contain bad driving, especially now that the cars are cheaper and you have a different demographic (younger, less safe). I don't know how the "city miles to critical disengagement" metric went down so much though, so it could be other factors -- like elon releasing unsafe code to the public. Which wouldn't surprise me

RosieDear 2026-03-11 22:44

Garbage in, Garbage out - Tesla came up with most of their schemes (code) before the hardware and the coding for vastly better and faster Neural Networks was commonplace. It's pretty simple to say it this way. If Tesla had a very good system, the improvements at this point would be extremely quick and accurate. And yet, their stuff doesn't even seem to match up to a Filipino labeling video clips. Something is very wrong. Tesla does not have a modern system doing this..... Now - IF they did, they would still have the problem of being stubborn on the camera thing....but that shows the depth of the problems. You can't get there from here.

RosieDear 2026-03-11 22:54

I am completely amazed that every single person who keeps up on tech does not know what you said. Even the folks here with experience in software -- unless it is from the last couple of years and involves tens of thousands of GPUs with the matching software stack, it's irrelevant. This isn't Windows. If one does not realize the true revolution just happened... then they can't even look at it critically. I often say that Tesla is the tech company for people who don't know tech, because it's very evident that little or no "machine learning" is being done. Even worse, little or no "human learning" seems to be done either. Folks are saying FSD shouldn't have hit and disregarded a properly marked railroad crossing 2x8 barrier! Imagine that. Many similar problems, such as it doesn't see crossed yellow caution tape -- a KNOWN physical barrier marking... It seems to me that Tesla never even downloaded the free libraries of ALL known traffic signs (there really aren't many of them) from the government and installed them into the foundation of their system. What excuse could they possibly have for not putting a basic library into this? In general, "as above, so below" and "tip of the iceberg" have proven true. Even when things seem to be so ignorant as to be impossible to believe... I have seen things be that bad. When someone can explain to me why Tesla doesn't have the government library of a couple hundred road indicators in it, I'm all ears. But, in any case, the "value" they supposedly are selling is their "training," which simply does not work. You shouldn't ever have to "report" a problem. That's silly. I can only imagine what a fella like Steve Jobs would say if he walked in at this moment and looked at Tesla's hairballs of supposed Neural Networks.

za72 2026-03-11 22:56

I agree, the entire approach is wrong... but Musk is too stubborn and too proud to "pivot" to a different approach because it doesn't fit his vision of a Jetsons inspired future... he's making real world decisions based on 60s cartoons

pcJmac 2026-03-11 23:07

For anyone trying to compare a neural network to traditional programming, they are two completely different animals and not at all the same. You do not “fix a bug” when working with a neural network (as if it were something you could locate and alter yourself directly). You adjust the weights of the inputs necessary to achieve the desired outcome that you did not get from the current weight settings for the billions of inputs. Will it work? Hard to tell. A “test suite” doesn’t really exist for this sort of thing beyond a random sampling of data that the model has not yet seen for it to try its new response against. But can you really run every scenario in the world to test what might have become broken in the process of adjusting the weights to favor this one new particular outcome? Well, sort of, whenever you release a new version I guess, because that’s what’s happening. Everyone becomes a beta tester and it would literally be impossible to have it any other way since this is how Elon is attempting his AI solution for self driving cars. I’ve explained this in more detail in another thread. I can copy it here if anyone is interested in the harsh reality.
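The "random sampling of data the model has not yet seen" point can be made concrete with a toy holdout evaluation (everything below is illustrative; a real evaluation would replay logged driving scenarios through the model):

```python
# Toy holdout evaluation: after retraining there is no bug to "locate" in
# the weights, only an aggregate pass rate on scenarios held out of training.
import random

random.seed(0)
scenarios = list(range(1000))        # stand-ins for logged driving scenarios
random.shuffle(scenarios)
train, holdout = scenarios[:800], scenarios[800:]

def model_passes(scenario_id: int) -> bool:
    # Invented stand-in for the retrained model: fails ~2% of all scenarios.
    return scenario_id % 50 != 0

pass_rate = sum(model_passes(s) for s in holdout) / len(holdout)
print(f"holdout pass rate: {pass_rate:.1%}")
# A high pass rate still says nothing about WHICH behaviors regressed --
# which is exactly the missing-"test suite" problem described above.
```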

Queasy-Bed545 2026-03-11 23:10

That doesn’t seem to be the case as the regression is difficult to situationally distinguish from things it does well all the time.

reddddiiitttttt 2026-03-11 23:14

Tesla’s stock is doing well because they make great products that have 100x upside. You can fault FSD, but no one else is doing what they are doing in a production vehicle. Their app integration is second to none and none of the big automakers are remotely close in that experience. People invest in Tesla because they drive one and it blows them away. Then they think about which car people will use in the future, and it’s pretty hard to deny that people will want a car that drives itself. You can look at the metrics all you want. Tesla has defied them for decades. They are doing incredibly stupid things for a car company. They are hurting their revenue doing it and Musk needs to just STFU, but that being said, the optimist position is that Tesla is now an autonomy company. They are going to flail for a while and there is a not insignificant risk that they will fail hard, but that’s also been true for Tesla’s entire life. This is a company that defies logic and the odds, but manages to build revolutionary products in multiple ways that appeal to a wide audience.

reddddiiitttttt 2026-03-11 23:20

Sure but if a taxi service was significantly cheaper than owning a car and more convenient, they will displace car ownership in addition to monopolizing the taxi market. Cities are key here. They are ripe for disruption. Imagine the massive savings if you never had to pay for parking or a car in the city. They can get rid of all street parking. There is certainly more costs to taxi service than just humans, but those other costs are scalable. They get cheaper with volume. Tesla is following a very risky strategy, but the money is there for sure. If they are as successful with autonomy as they were with defining the EV market, they will grow into their valuation.

za72 2026-03-11 23:20

It could park itself; then they expanded it: well, if it can do that, it could drive itself, no?!?

Queasy-Bed545 2026-03-11 23:22

Well, I am interested. I have a Tesla with FSD so I’m invested in that regard. I bought it being fairly bearish about it ever getting to a safe, unsupervised product though, let alone something truly marketable.

tech01x 2026-03-11 23:26

Lol.

Queasy-Bed545 2026-03-11 23:27

I am a little skeptical about this metric though.  Presumably Tesla’s number includes disengagements from all sorts of drivers rather than trained and employed safety drivers like Waymo?  Also what is a critical disengagement and how is it judged? I often disengage for situations that frankly just give me the willies.  Would we have crashed? We will never know, but it’s just not worth my mental health to find out.

Queasy-Bed545 2026-03-11 23:32

The problem is scale. This is why mass transit is the answer but we are too stubborn to accept it.  You would need a taxi fleet the size of the car population to accommodate peak travel periods.

Razzputin999 2026-03-11 23:37

It would certainly help. I drive home on US Route 20 a lot and it always thinks the speed limit is 20 mph at one place (it’s actually 40). OTOH, the database solution isn’t 100% accurate either. Everything depends on drivers having common sense.

reddddiiitttttt 2026-03-11 23:45

That presumes we have good, complete data. We do not. We don’t know if those disengagements are because the software is now more cautious and safe or if it’s putting itself into more dangerous situations. Like, does it disengage now because it sees a pothole it doesn’t like, which is a thing it didn’t used to do? A disengagement is also not particularly dangerous, as the car just slows to a stop. It’s not randomly trying to kill you; just the opposite, it’s defaulting to safety. The thing you should be concerned about is the opposite: a precipitous fall in disengagements without a similar improvement in capability. Disengagements mean you need a driver, but they don’t mean you have a more dangerous system. We just don’t know what this metric means for safety. Software gets exponentially more difficult to maintain the larger it gets. The one and only reason you think software gets more reliable over time is because the successful ones dump exponentially more money into QA as the user base expands. I am a software consultant, and my smaller customers who don’t have full-time software engineers have software that gets worse as users grow and databases scale. They use it in ways it wasn’t designed for, and scale makes things harder. For my larger clients, development pace slows to a crawl as the code base and requirements expand. Machine learning is also a whole other beast. It’s extraordinarily hard to identify root causes with it. The general fix is to just train it more on the problematic scenarios, but you rarely know exactly why it failed.

dw-c137 2026-03-11 23:47

Edit: I misread and thought the comparison was 13 to 14 when the speed limit feature was changed, which would have correlated to increased disengagements from speed limit issues, but it was 14.1 as the best performing so my thinking was incorrect. Edit: I'm not getting the strikeout option while editing on mobile so my original comment that is incorrect is in italics. *Given V14 introduced a very different level of urgency in reporting and acting on incorrect speed limits vs all previous versions where you just adjusted max speed but it didn't count as a "critical disengagement", it absolutely would not surprise me that disengagements are up, honestly it's surprising it's not significantly more up, mine are definitely up more than that statistic, but it's almost all reporting a bad speed limit*🤷🏻‍♂️ I wish the author had explained what they consider to be a "critical disengagement." If they don't also have access to the user recordings or vehicle telemetry from after a disengagement how are they determining if it's critical or a bug note or non safety navigation error? Tesla shares the blame for not allowing analysis of their data and we end up with stats that are fairly useless 🤷🏻‍♂️

reddddiiitttttt 2026-03-11 23:52

Not really. If every car was autonomous you would have no traffic, no stop lights. You could serve more people with fewer cars on the road. You can also ride share. You can adopt hub-and-spoke models, and buses can be autonomous too. You can stack and pack autonomous EVs too, as you don’t need to leave space for a driver to get in and you don’t need to park in close proximity to where pickups and drop-offs happen.

BlackSheepInvesting 2026-03-12 00:03

Also, the issue is not time. It's compute. Today you have AI data centers literally worth 10s of billions of dollars and they can't even solve this problem. And yet somehow Elon Musk thought that he could solve this with a few room's worth of 2016 era GPUs? Tesla today likely has on the order of 1000X the amount of compute available, multiple generations of FSD computers, and STILL can't even come close to the performance of a new teenage driver. They were bragging about how many miles of data they had, and now Elon Musk says actually they need 10B miles of FSD-driven data. Oops. Here's an old article from 2016 with some really funny quotes: [Tesla’s driverless advantage over Google, Uber, Ford: 1.3 billion miles of data – The Denver Post](https://www.denverpost.com/2016/12/25/tesla-driverless-advantage/) The chasm between today's compute and 2016 compute really puts in perspective how badly his prediction aged. It was clearly not done in good faith. The issue isn't that it's taking more time, the issue is that even with 10 years and like 1000X the compute, and literally $B's/yr in research costs, they are still struggling to get to 2016 Waymo levels of reliability. That is just damning beyond words. The level of stupidity baked into that whole narrative is just mind boggling.

CrossingChina 2026-03-12 00:09

Their app integration is second to many in China. They certainly don’t have the best in car software here either. They are a middle of the road product with a brand name people recognize. That’s the only thing keeping them afloat here currently, their actual products aren’t competitive.

practicaloppossum 2026-03-12 00:15

Isn't the cited change in disengagements comparing v14.2 versus v14.1? Or are you saying that the stricter response to speed limits was introduced in a point release (namely 14.2)? I agree with you that it's hard to draw conclusions without having a clear understanding of what we're comparing.

Queasy-Bed545 2026-03-12 00:20

What are you talking about? Traffic lights exist because of right-of-way conflicts and have nothing to do with autonomous driving.

dw-c137 2026-03-12 00:22

I misread, you are correct! Given how many things change from version to version it's not a specific enough definition

reddddiiitttttt 2026-03-12 00:23

Traffic lights exist because they are the simplest way to communicate with human drivers. Autonomous vehicles can communicate wirelessly and can be relied on to always follow the rules. If every car was autonomous, you would never have a circumstance where you sit at a red light because you can’t trust the intersecting traffic to stop for you.

tanbyte 2026-03-12 00:35

Elmo scratching his arrogant bald head: ‘Yea, why do we need LIDAR again?’

dw-c137 2026-03-12 00:35

Any definition of "critical disengagement" in terms of a takeover by a Tesla driver would be good. Is every disengagement by a non-professional Tesla driver, for any reason whatsoever, as critical as a professional safety driver's? Since 14 I have disengaged multiple times as frequently as on previous versions, but it's almost entirely to be able to send an "incorrect" speed limit message.

Queasy-Bed545 2026-03-12 00:36

Sorry, no.  Traffic lights establish right of way.  Even if you could directly communicate there would be no structure to determine who should have the right of way. Every intersection would be a negotiation.

RosieDear 2026-03-12 00:38

The failures are real. FSD is a major failure. Boring Company = failure. Solar company = failure (never even 1% of the US market). Optimus = laughable. Twitter = failure (Musk predicted $27 billion in revenue, now $3 billion... DOWN from $5 billion when he bought it). AI: he had the biggest chance of anyone with OpenAI, but Altman walked away, and Musk sent him an email saying Altman had a "zero percent chance" of success without him, etc. You really have to look at reality and say those are failures... it's an improper narrative to say "oh, he's just a little late" when we can apply actual metrics to all of these. His biggest success is in figuring out where government money (our tax dollars) comes from and harvesting it -- and, of course, the stock "con," which was based on promises of Robo-Taxis in 2021.

nlaak 2026-03-12 00:41

>Tesla’s stock is doing well because they make great products that have 100x upside. You can fault FSD, but no one else is doing what they are doing in a production vehicle.

FSD is an L2 autonomous system. There are L3 and L4 systems on the market already. Systems that are properly engineered and tested, unlike Elon let-the-customer-test-it Musk.

> Their app integration is second to none and none of the big auto makers are remotely close in that experience.

Hate to burst your bubble, but the large majority of people who buy cars have a primary interest in... driving their car. Tesla has the worst quality metrics of any major manufacturer.

> Then they think about which car will people use in the future and it’s pretty hard to deny people won’t want a car that drives itself.

Tesla cars will never fully drive themselves. Their technology stack is fundamentally flawed, as evidenced by the zero interest from other manufacturers in licensing FSD -- which Elon pushed as a major profit point for years.

>Tesla has defied them for decades.

When they had the only game in town, sure, but there are dozens of cars that are much better than Tesla across the board.

>that being said the optimist position is that Tesla is now an autonomy company.

Even Elon doesn't believe that bullshit anymore.

> They are going to flail for a while and there is a not insignificant risk that they will fail hard

Why would this be true, if they're so great and everyone that drives a Tesla wants one? You can't even be coherent in your fan-boyism.

>manages to built revolutionary products in multiple ways that appeal to a wide audience.

Except they don't any more. Sales are dropping like a rock worldwide. Sure, led by the Nazi, but their products are bland and stale.

reddddiiitttttt 2026-03-12 00:44

Yeah, China looks like a real threat, but I haven’t seen any myself. What US market car has a better infotainment and app experience? Some of the in car UIs are a bit better, but I haven’t seen any that get both that and the app experience right. Most feel a decade behind my 8 year old Tesla. They had the best selling car. I have yet to meet a Tesla Owner that was disappointed in the product. I’ve driven many cars, many I really liked. None could I imagine owning after my Tesla.

nlaak 2026-03-12 00:44

> it’s elon’s political polarization driving a lot of it

Sure it is, but their cars are bland and stale.

>i see this as a temporary blip

Who cares how you see it?

>ubiquitous charging infra across US

The only thing Tesla has going for it is their charging network.

> the biggest one, autonomy

Not even close. The interest in autonomy is a lot lower than you think. Even among Teslas, when people still believed, the uptake rate was never that good.

>there’s too much value to be had from these vehicles and Tesla ecosystem

That's cute. There's no value in Tesla vehicles or ecosystem. There are a lot of cars that eclipse the Tesla models, especially from China, that are going to eat them alive, and they gave away their only selling point, the Supercharger network, by letting everyone in.

>people will justify / rationalize the bias in their minds

You almost get the point, but clearly don't realize you're talking about yourself.

nlaak 2026-03-12 00:47

> XAi, xitter, and spacex investors will all ultimately bail out Tesla in a maneuver that gives Elon more control.

Maybe, but it doesn't matter; Tesla is going to drag them all down. What use does a space company or an AI company have for auto factories making no cars? Elon would be better off rolling xAI and SpaceX together, though xAI is about as useful as xitter is, and letting Tesla just fade away. He won't, because his ego won't let him; he needs to feel that everyone loves him.

nlaak 2026-03-12 00:49

> FSD is more than a gimmick. It's really not. A camera only model is flawed in every way, and their training model appears to be more hack and patch than it is actual engineering. >imo you’re right that it lacks a sound monetization scheme to support its valuation, even if they did “solve” autonomy. True wide open autonomous driving is a LONG way off, and even "solved" it's not a trillion dollar idea, any more than the Tesla robot is.

pcJmac 2026-03-12 01:01

Sure. This was in response to an announcement regarding v14 lite and its likelihood of success. It’s long but informative… A lot of people here who REALLY don’t understand how AI works are making some very bold predictions about how they think v14 lite will be when (if) it comes out (and v14 itself if it ever reaches true FSD — spoiler alert — it won’t). Elon has a shit show on his hands, having lost his best AI talent to other companies or to them building their own startups. And now the realization is starting to sink in that his hw4 hardware is also insufficient to produce reliable FSD. Restructuring of the AI models into multi-phase models will simply introduce more “competition” between the sub-models for control. We’ve already seen this in current hw4 implementations when they obviously tried once already to implement this architecture unsuccessfully and it produced various back-and-forth anomalies as systems competed for control. Simply put, AI takes a lot of processing power and its data must be coherent. This is easy in a single-phase system, as there is no way for data not to be coherent (it’s all processed in one go). But start breaking it up into little subgroups of the same type of work and try to put the results through a second time, and you’ve not only dropped your response time by at least half (if not more) but you’ve now introduced new pathways for conflict that weren’t present before. Further, once you commit to an architecture, all of your training data must also be trained for that architecture and only that architecture. The way to do it is the way that Nvidia has architected it, by dividing the work into two phases of DIFFERENT types of work. First phase, segmentation (object recognition), followed by the second phase, scene interpretation. By dividing the work this way, you get clear indications of whether that flashing red light belongs to a stop light or a bus BEFORE its purpose is interpreted.
In Elon’s world, he’s trying to interpret raw light patterns and make sense of them among billions of other inputs. Now don’t get me wrong — it’s truly impressive what this path was able to accomplish, but unfortunately, it’s just going to fall woefully short of the finish line when it comes to that last 1%. It would take an actual miracle to get the weights adjusted properly for the billions of inputs to properly code what needs to happen for every possible situation reliably. And don’t get me started on the things FSD does that people think are good which are actually quite dangerous (and are just accidents waiting to happen). But segmentation is a much easier and more reliable first step to provide AI with a much better first source of data. Once it can reliably “see” all of the objects in a scene (using whatever suite of sensors deemed necessary), an LLM can pretty much verbally describe what needs to happen (and you can verify this in realtime). And crucially, these 2 layers can be modified, improved and updated independently. The tangled mess of Elon’s approach cannot. And this is why each release from now until the end of (Tesla’s) time (or until Elon adopts the Nvidia approach) will continue to improve one area of FSD while sacrificing another. And this is also why nobody wants to license Elon’s half-baked approach as it will never make it across the finish line — and he will stick with his failed assumptions long after he has been lapped by the competition because that’s just the type of guy he is!
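The two-phase split being described (perception first, interpretation second) can be sketched abstractly. Everything below -- the types, labels, and decision rule -- is invented for illustration and is not any vendor's actual API:

```python
# Toy two-phase pipeline: phase 1 recognizes objects, phase 2 interprets
# the scene. Each phase can be tested and updated independently.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str   # e.g. "traffic_light", "school_bus" (invented labels)
    state: str   # e.g. "flashing_red", "green"

def segment(frame: dict) -> list:
    """Phase 1: object recognition, testable in isolation against
    labeled images, independent of any driving decision."""
    return [DetectedObject(**o) for o in frame["objects"]]

def interpret(objects) -> str:
    """Phase 2: scene interpretation. The same flashing red light means
    different maneuvers depending on what it is attached to."""
    for obj in objects:
        if obj.state == "flashing_red":
            if obj.label == "school_bus":
                return "stop_and_wait"
            return "stop_then_proceed"
    return "proceed"

frame = {"objects": [{"label": "school_bus", "state": "flashing_red"}]}
action = interpret(segment(frame))
print(action)  # "stop_and_wait"
```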

CrossingChina 2026-03-12 01:04

USA doesn’t really have much EV competition as far as I’ve seen. US is basically a closed market a decade behind the rest of the world at this point and only falling farther behind

JSchmeegz 2026-03-12 01:10

IMO the biggest problem with Tesla is the empty promises and grossly missed timelines. They have promised full autonomy for what seems like 10 years…. The M3 saved them. They came out with a great affordable car…. The MY is right up the same alley. Then they promise a $50K truck and deliver an 80-100K truck, well above the inflation rate. The 4680s were supposed to be a great improvement and instead seem like more of the same. Other companies are catching them, and possibly surpassing them in battery tech. Tesla went from a failing luxury niche company to one providing for the masses, and seems to be going back to the luxury niche market once again for whatever reason. I actually support Elon but his goals are simply unrealistic and I am losing faith in almost everything he says now. Full circle…. More empty promises.

pcJmac 2026-03-12 01:17

And just to piggyback off of one of your points: If Elon truly thought he could license his FSD to other manufacturers, why wouldn’t he wait until it was complete and fully approved for autonomous use on the road? Unless, of course, HE KNOWS IT WILL NEVER BE READY!!! (as this is as good as it’s going to get so he will try to sign up as many suckers as possible — for both manufacturing licenses as well as customer lifetime subscriptions).

Lacrewpandora 2026-03-12 01:39

>I have yet to meet a Tesla Owner that was disappointed in the product.  "Going through life with blinders on, it's tough to see. I had to get up, get out from under and look for me." - Linda Lavin, 1976

MarmotFullofWoe 2026-03-12 01:49

I think you guys need to look outside the US domestic market. China now sells the most cars in Australia, having taken the no.1 spot from Japan.

Quercus_ 2026-03-12 02:19

As I understand it, because they're doing end-to-end AI, there are no bug fixes. They have to retrain their model, and when they do it's effectively a new product. If the car is swerving down the road to avoid pavement snakes, they can't simply write code to ignore pavement snakes; they have to get new training data and create a brand new version of the model, with no guarantees that it won't break something else. And there's no unit testing, because the whole thing works as a black-box integrated package.

Queasy-Bed545 2026-03-12 02:31

Why would a camera-only model be fundamentally flawed?

Queasy-Bed545 2026-03-12 02:42

That is fascinating. I’m not an AI guy. I’m a systems engineer so please forgive me ahead of time for any absolutely dumb questions to follow. I guess I don’t see how segmentation necessarily helps the end result. I mean it sounds like a great way to organize and allocate tasks, but at the end of the day you still have to evaluate how the system performs on the road. On the road you’ll have complex interaction of models that you can’t have anticipated and wrung out in tests. When you have a disengagement or worse, it still seems very difficult to diagnose and retrain if you don’t understand the interaction between the segments.

pcJmac 2026-03-12 03:30

Not dumb questions at all. AI is a very magical topic with very unique rules. The issue is one of context. I saw a video of a Tesla stopping because it “saw a red light” when in actuality it was a hot-spot reflection on the back of a red car. But it looked just like a red light (more like the energy source of Iron Man, but on the back of a car). When you first know that this is a car and not a stoplight, it changes how you interpret that “signal”: it simply cannot be a stoplight if it’s surrounded by a car, and that input is rejected. Further, sensors like LiDAR provide extremely accurate anchors with which to view the world and confirm what the model sees. Once it sees a traffic light in a certain place, that light is unlikely to move, so the system knows where to expect lights to be and not to be.

This type of AI can begin to incorporate more traditional programming techniques, like algorithms that describe what is okay and what is not okay, so it doesn’t need to rely on training in the same way a pure neural network does. There’s obviously a lot more to it than just this, but one key difference you get from Nvidia is the model’s understanding of the scene, which Tesla’s cannot provide. Post mortem, Tesla can show which inputs fired, but that’s just raw data with little meaning to humans without some kind of interpreter trying to make sense of it all (and even then it’s largely guesses and assumptions, since even visual AI models don’t see the way humans see — the patterns AI uses to recognize something are completely different from how humans identify things). In contrast, segmented data can actually be described for humans, to show what the model is thinking at every moment, so you know immediately if its interpretation of a scene is incorrect — it’s basically using a built-in LLM to manage the operation. Tesla has to apply a loose interpretation of the data after the fact.

Finally, to address your question about interpretability: yes, it is still hard, but the ability to make incremental progress is much better when you have isolated the two functions of segmentation and interpretation, because the two ideas are not intertwined and can be evaluated and optimized separately. All Elon can do is feed a video example of undesired behavior back into his system with the request to adjust the weights so that it doesn’t happen next time, and hope it works without adversely affecting performance somewhere else. Nvidia, on the other hand, has many more options at its disposal for addressing a performance failure, depending on its nature. Did it fail to recognize an object? Did it put it into an incorrect context? Is there a rule that could be applied in cases like this? Etc. Again, it’s very difficult to convey why things are the way they are without having to learn all about AI, and I’ve obviously simplified many concepts for this write-up, but hopefully this explanation sheds a little more light on the topic.
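For anyone who prefers code, here is a toy sketch of the two-layer split described in this thread: a perception stage that turns raw sensor data into labeled objects, and a separate planning stage that reasons over those labels. All of the names and numbers are hypothetical, and the perception stage is stubbed rather than running a real model; the point is only that the two stages can be inspected and swapped independently.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str        # e.g. "traffic_light"
    state: str        # e.g. "red"
    distance_m: float

def perceive(raw_frame) -> list[DetectedObject]:
    """Segmentation layer: turns raw sensor data into labeled objects.
    Stubbed for illustration; a real system would run a detection model here."""
    return [DetectedObject("traffic_light", "red", 40.0)]

def plan(scene: list[DetectedObject]) -> str:
    """Interpretation layer: maps a labeled scene to an action.
    Because its input is labels, its decision can be read directly."""
    for obj in scene:
        if obj.label == "traffic_light" and obj.state == "red" and obj.distance_m < 60:
            return "stop"
    return "proceed"

# The planner's output is human-readable, and either layer can be
# replaced without retraining the other.
print(plan(perceive(None)))  # → stop
```

In an end-to-end system there is no seam like this to inspect; here, logging `scene` at the boundary tells you exactly what the planner believed.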

pcJmac 2026-03-12 03:41

Red light car: https://www.reddit.com/r/TeslaFSD/s/6PR9Ut3Ecp

Queasy-Bed545 2026-03-12 04:51

“the patterns AI uses to recognize something are completely different and unrecognizable to how humans identify things).” That seems to create a problem with the whole concept of driver supervision. Or perhaps the disengagement metrics. I mean physics is physics so at some point you crash or run the red light but how am I supposed to proactively supervise it if we don’t perceive things the same? You essentially train a teenage driver to drive the way you drive and so I assumed the process of training an AI was trying to copy what’s going on in the human driver.  Guess I’m having a hard time figuring out how you train or judge an AI model when you don’t know what it sees or what it’s thinking.  But thank you for the engaging conversation and explanation. It’s nice to see the internet isn’t always a waste of time.

Queasy-Bed545 2026-03-12 05:04

🤯

pcJmac 2026-03-12 05:21

I think I may have inadvertently confused that concept a bit — AI HAS to see things in this way so as to avoid needing a pixel-for-pixel match for identification. It's kind of like the sum of the parts equaling the whole: it recognizes each of the parts as contributing to a collection of signals that, together, mean you are more likely viewing a “3”, for example, than any of the other nine possible digits. Here’s a Medium article with good screenshots taken from a much deeper 3blue1brown video (some of my favorites) that explains this introductory neural-network topic in much better detail. You might be able to get the concept with a quick perusal of the article, but the 20-minute video gives a lot more of the underlying math (you’ll still be able to appreciate the ideas even if the concepts can sometimes be a little tough to follow).

Medium article: https://medium.com/@aisgandy/how-neural-networks-learn-to-recognize-handwritten-digits-the-intuitive-way-38074a59fb88

3blue1brown video: https://www.3blue1brown.com/lessons/neural-networks
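A stripped-down version of that "sum of the parts" idea in code: each candidate digit is scored by how strongly a set of part-detectors votes for it. The feature names and weights here are hand-set toys for illustration, not learned values, and a real network's intermediate features would not be this nameable.

```python
# Hypothetical part-detector activations for one input image
# (1.0 = the detector fired, 0.0 = it did not).
features = {"top_curve": 1.0, "middle_bar": 1.0,
            "bottom_curve": 1.0, "vertical_stroke": 0.0}

# Toy per-digit weights: how much each part votes for each digit.
weights = {
    3: {"top_curve": 1.0, "middle_bar": 1.0,
        "bottom_curve": 1.0, "vertical_stroke": -1.0},
    1: {"top_curve": -0.5, "middle_bar": -0.5,
        "bottom_curve": -0.5, "vertical_stroke": 2.0},
}

def score(digit: int) -> float:
    """Weighted sum of part signals: the 'collection of signals' idea."""
    return sum(weights[digit][f] * v for f, v in features.items())

best = max(weights, key=score)
print(best)  # → 3 (the curve detectors outvote the stroke detector)
```

Training, in this picture, is just the process of finding those weights automatically, and the patterns that emerge inside a real network rarely map onto human-legible parts like "top curve".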

dontletthestankout 2026-03-12 06:10

Luxury? Lol nah

Withnail2019 2026-03-12 09:37

It's never worked, it never will work.

BurtMacklin-FBl 2026-03-12 10:11

How does this great idea of yours work when you add in pedestrians?

ionizing_chicanery 2026-03-12 10:47

SpaceX has already purchased xAI. Though I hardly see the synergy there either. Space based AI data centers are a fool's errand.

ionizing_chicanery 2026-03-12 11:01

Disengagements are done by the driver, not FSD. The fact that the system isn't aware of when it's about to do something stupid is the entire problem.

It's not foolproof, but the platform where drivers track this does have data on why drivers say the disengagements happen. And it's usually because the vehicle was about to hit something or drive off the road. Note that the < 2,000-mile number (and ~800 miles for city driving) is for *critical* disengagements, i.e. those the driver felt were necessary to prevent the car from doing something dangerous. Non-critical disengagements are like 50x more frequent and happen because FSD is stupid about things like lane preparation and navigation. While less problematic, these frequent non-critical disengagements also contribute to FSD's deficiencies and could be masking the need for critical disengagements. Higher-quality data would have trained supervisors who don't perform non-critical disengagements.

BTW, a true unsupervised vehicle has no critical disengagement rate, because there is no one to safely disengage it. Instead it becomes a crash rate, and once every 2,000 miles (or 800 in the city) is far too frequent to be viable.
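To put the rates quoted in this thread in perspective, here is the back-of-envelope arithmetic: 809 city miles per critical disengagement (FSD v14.2, per the article) versus Waymo's quoted ~30,000 miles. The annual mileage and city-driving share below are illustrative assumptions, not data from the article.

```python
# Numbers from the article above:
FSD_CITY_MILES_PER_CRITICAL = 809
WAYMO_MILES_PER_CRITICAL = 30_000

# Illustrative assumptions about a typical driver:
miles_per_year = 12_000
city_share = 0.5

city_miles = miles_per_year * city_share          # 6,000 city miles/year
fsd_events = city_miles / FSD_CITY_MILES_PER_CRITICAL
waymo_events = city_miles / WAYMO_MILES_PER_CRITICAL

print(round(fsd_events, 1))    # → 7.4  critical city events per year at FSD's rate
print(round(waymo_events, 2))  # → 0.2  at Waymo's quoted rate
```

The point about crash rate follows directly: if nobody is there to intervene, each of those ~7 events per year is a potential crash rather than a disengagement.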

Upstairs-Pea7868 2026-03-12 15:07

The problem is that it’s not iterative dev anymore. They are using AI. Like any AI work, it’s less about refinement and more about “maybe try again with more training data and a different seed?”

ImprovementJust7634 2026-03-12 15:42

Tesla cars will slowly die off. The main reason they have been so successful is that they started so much earlier than everyone else, which made their software superior. That said, the Chinese have caught up or surpassed them in hardware, and possibly even in software. Others will catch up and surpass Tesla. With Elon at the helm, Tesla is done. Many top people are already gone from Tesla because of Elon. Consumers are leaving or choosing something else as more options hit the market, and many won't buy anything from Elon. Elon knows it, and that is why he is shifting to robots and space.

nlaak 2026-03-12 16:19

> Why would a camera-only model be fundamentally flawed?

Not sure if you're being deliberately obtuse, ignorant of the hundred articles talking about it, or a blind Tesla supporter, but I'll bite.

Without a (massively powerful) infrared projector, cameras are visible-light only. They're obscured by fog, heavy snow, rain, and most importantly the sun itself. A camera does not (in any way that's even remotely good enough) provide distance. Without sufficient resolution, it has a hard time providing good direction and speed between images, at least without a lot of averaging. Tesla's cameras don't sit far enough apart to provide decent stereoscopy or parallax.

Because good data matters so much, any decent *engineer* is going to use sensor fusion (which, hilariously, Elon shit on, either because he doesn't understand it or because he knows *his* push to switch to camera-only was stupid). Radar, lidar, cameras, and (possibly) ultrasonic sensors all have their ideal and less-than-ideal use cases, which overlap, letting the system weight the data based on current conditions and order the value of the data accordingly. Any properly designed system with multiple disparate sources of data will always be better than a single sensor type at resolving ambiguity.
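The condition-dependent weighting argument can be sketched in a few lines: combine distance estimates from several sensor types, weighting each by a confidence that degrades with conditions. The readings and confidence values below are made up for illustration and don't come from any real vehicle.

```python
def fuse(readings):
    """readings: list of (distance_m, confidence) pairs.
    Returns the confidence-weighted average distance estimate."""
    total_w = sum(w for _, w in readings)
    return sum(d * w for d, w in readings) / total_w

# Illustrative fog scenario: camera confidence collapses while
# radar is largely unaffected and lidar is partially attenuated.
foggy = [
    (52.0, 0.2),  # camera: degraded by fog
    (48.0, 0.9),  # radar: mostly unaffected
    (47.5, 0.6),  # lidar: partially attenuated
]

print(round(fuse(foggy), 2))  # → 48.29, pulled toward the high-confidence sensors
```

The point of the comment above is exactly this: with one sensor type there is nothing to re-weight when conditions turn against it.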

EverythingMustGo95 2026-03-12 16:31

No mention of $7500 rebates? Or selling clean air credits to other car companies?

Queasy-Bed545 2026-03-12 16:52

Yeah, that’s all great, but it still doesn’t explain why you can’t drive a car with only cameras. Any closed-loop control system is going to be limited by its observations. You can always make the case for more or better observations, but that doesn’t mean the ones you have can’t reasonably do the job. The conclusion that any ambiguity must be resolved is based on certain assumptions about how to operate a vehicle. I’m not really an Elon Musk fan, but you really haven’t dispelled his central argument that drivers have eyes and manage reasonably well. Humans cannot see everything. We cannot, for example, see when the sun is in our eyes, but we can drive slower, use a sun shade, look away, or even fashion a guess about what's there based on what we expect is happening.

Queasy-Bed545 2026-03-12 17:03

I also love 3Blue1Brown. It was tremendously helpful when I needed a crash course on quaternions. I think your statement and explanation makes sense based on the description of the digit ID model's intermediary layers doing things that don't necessarily translate to the way we think we identify a 3, for example.

a4xrbj1 2026-03-12 20:26

Do you know how Xpeng's vision-only system is addressing this problem? It seems to perform a bit better, or at least it isn't under the same scrutiny as Tesla's botched approach. It all started to go wrong when Elon dismissed lidar, just because his people weren't able to sort out how to handle two (or three) systems that deliver contradicting results.

Also, you seem to know a lot about Nvidia's approach. Your original response was written some time ago (before FSD 14), and in the meantime Nvidia has launched their own system, which is implemented in the new Mercedes CLA. Does their system work fundamentally as you described: basically using a rule book the way a newbie driver would stick to the law and regulations, with a second system constantly verifying/checking the conclusions of the sensor-fed (radar, lidar, camera, etc.) LLM to decide whether its next steps are okay or not?

Pr3fix 2026-03-12 22:40

All of your criticisms are equally valid for human eyeballs. But I see a lot of humans driving through those conditions just fine.

failinglikefalling 2026-03-13 01:48

Any modern car.

Due-University5222 2026-03-13 02:34

That is because humans are not end-to-end neural nets like FSD. I use FSD 14.2.2.5 every day. I love it. However, I have also been an engineer for over 30 years. A single sensor modality feeding an end-to-end NN with very limited local inferencing is poor design. I will have this argument with any Tesla engineer. The non-determinism is shocking. The lack of localization is frustrating. It also appears decision making is severely impaired by a lack of HW4 memory.

Due-University5222 2026-03-13 02:38

Wow! No! Please do not confuse mass autonomy with self-organizing networks. If EVERY car were autonomous, the underlying network would be both chaotic and vulnerable to pathogens (a whole bunch of autonomous vehicles reacting to external manipulation).

ImprovementJust7634 2026-03-13 04:25

Those also will add to Tesla's troubles.

Argon522 2026-03-13 12:23

These are critical disengagements, i.e., preventing accidents or illegal maneuvers, not 'because I felt like it' ones.

Argon522 2026-03-13 12:25

From the FSD tracker website:

Categories of Disengagements:

* Critical: Safety Issue (avoid accident, taking a red light/stop sign, wrong side of the road, unsafe action). NOTE: These are colored red in the Top Causations for Disengagements chart on the main dashboard.
* Non-Critical: Non-Safety Issue (wrong lane, driver courtesy, merge issue)

Edit: to clarify, the article is only talking about critical disengagements.
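The tracker's split can be expressed as a tiny lookup, which makes clear that the two buckets are disjoint by definition. The cause strings below are paraphrases of the categories quoted above, not the tracker's actual field values.

```python
# Paraphrased from the FSD tracker's published categories; the exact
# strings the tracker uses are an assumption here.
CRITICAL_CAUSES = {"avoid accident", "taking red light", "taking stop sign",
                   "wrong side of road", "unsafe action"}
NON_CRITICAL_CAUSES = {"wrong lane", "driver courtesy", "merge issue"}

def categorize(cause: str) -> str:
    """Map a reported disengagement cause to the tracker's two buckets."""
    c = cause.strip().lower()
    if c in CRITICAL_CAUSES:
        return "critical"
    if c in NON_CRITICAL_CAUSES:
        return "non-critical"
    return "unknown"

print(categorize("Avoid accident"))   # → critical
print(categorize("Driver courtesy"))  # → non-critical
```

The 809-mile figure discussed in the article counts only the first bucket.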

THATS_LEGIT_BRO 2026-03-13 12:29

Deciding to voluntarily disengage with no anticipated issue foreseen is not a critical disengagement. If I disengage to pull into my garage manually instead of letting it drive into my garage, that is not a critical disengagement. “Heavy storms coming. I’m taking it off FSD.” is not a critical disengagement. Hitting the brakes because I thought it was going to run a stop sign would be a critical disengagement.

Argon522 2026-03-13 12:46

Those are specifically separated in the dataset. Taken from the FSD tracker page:

Categories of Disengagements:

* Critical: Safety Issue (avoid accident, taking a red light/stop sign, wrong side of the road, unsafe action). NOTE: These are colored red in the Top Causations for Disengagements chart on the main dashboard.
* Non-Critical: Non-Safety Issue (wrong lane, driver courtesy, merge issue)

pcJmac 2026-03-13 17:45

Without direct knowledge of the Xpeng system, it would be hard to say (or even to confirm whether it really is better or just different). Training data is critical to getting any AI to understand what you need it to do. Often this means highlighting the critical area that needs attention, which may be a secret sauce for a company that, for example, could use simulations carefully curated to depict the desired training.

I saw a great example of this recently in another visual domain, used for replacing green-screen FX. An AI was being trained on samples of green-screen footage to “teach” it how to create a compositing matte. The first attempt was okay but not useful for the task; it just wasn’t good enough. But the engineer took the additional step of compositing the results onto a pink background (to reveal the obviously leftover green pixels) so that another round of training would help fix the issue. A third round, composited against a gray background, provided the final piece of the puzzle, and it worked brilliantly.

Now, is there an equivalent that can be done with car footage? Maybe. But it takes a creative mind to come up with these types of solutions, and Tesla is bleeding AI talent left and right as the FSD product continues to flounder. The fact that Elon considers FSD solved isn’t really helping either.

swirlymaple 2026-03-13 18:11

Isn’t it weird? When I studied engineering, the focus was on trying to make everything as well understood and predictable as possible. The better you understand a system, the more reliably you can control its behavior and outcomes, whether it’s a steel bridge or a chunk of code. Now we’ve created systems that are so complex, we have no real control over how they do what they do. And that makes them wildly unpredictable, as well as difficult to modify/improve in a consistent, deterministic way.

a4xrbj1 2026-03-13 19:18

Thanks for the quick reply; I understand your point about Xpeng. Unfortunately there isn't much known about it, at least to us Westerners, whatever Chinese users may know. Any comment on Nvidia's newest autonomous driving solution?

pcJmac 2026-03-13 20:31

Oh yeah, you pretty much had the Nvidia system right — an LLM communicates the scene to the next stage, giving you a more “human-like” control layer (not sure what else to call it). But this LLM in the middle gives a lot more visibility into what the AI is thinking and allows for more control, since the next stage of AI processing (and any override processing) knows what it’s trying to do.
