
On that "autonomous" cross-country drive in a FSD-equipped vehicle

adamjosephcook | 2026-01-01 22:01 | 140 views

Feels like every year I am basically addressing the same (silly) things, but here goes. Happy New Year!

1. No Tesla vehicle sold to consumers is capable of "driving itself" or operating "autonomously" (however that is defined). If it were, Tesla would not have [these disclaimers](https://www.tesla.com/ownersmanual/modely/en_us/GUID-2CB60804-9CEA-4F4B-8B04-09B991368DC5.html) in their official vehicle owner's manual. And that is really it. Indisputable, I would hope. That is Tesla stating, in the legal fine print that Tesla uses to protect Tesla, that their system is not capable of driving itself. You, the human driver, are viewed by Tesla as the safety layer - not the other way around.

2. From 1, these vehicles are black boxes to all of us. You have no idea what assumptions Tesla is making behind the scenes. What Tesla is hand-waving away. What the vehicle is ignoring or responding to at any given time. Maybe the vehicle becomes temporarily blinded and is just straight-up YOLO-ing it? Maybe there are things in consumer-owned vehicles that Tesla is ignoring that Tesla cannot ignore in their "robotaxis" that are not sold to consumers? Because there is a human driver in the driver's seat and because Tesla has that legal fine print protecting them... Tesla can take ***wide*** liberties in tossing ***all*** of the risk onto you and onto John and Jane Q. Public. The risk is the whole deal in safety-critical systems. All of the economics. Make peace with the fact that you know nothing. These are black boxes. And no amount of FSD "experience" will ever change that.

3. Even in well-managed system safety lifecycles, which Tesla obviously has zero interest in maintaining, there are a myriad of Human Factors risks - the most notable being that, given enough experience with a system, the test operator begins to "trust" the system. Forms a mental symbiosis with it. The test operator naturally becomes complacent. Does not even realize it. Starts subconsciously ignoring potential system failures that should be documented and addressed. This is a real, continuous risk even if there has been significant effort to read the test operator into the system. To educate and update the test operator on what is in the "black box". With consumers? With this FSD program? Forget about it. It is 100% open-ended. No training. No management of the operator. No management of the vehicle. Deceptive marketing. YOLO. Worse than what went on in the Boeing 737 MAX program. I have watched ***a lot*** of "zero intervention" FSD videos over the years where high-profile Tesla Twitterati blew through stop signs and stop lights without even acknowledging it. For years and years. But the "zero intervention" flag is still planted firmly at the top of the mountain. Community-developed "FSD Beta" trackers devoid of any mention of the issues. Just make peace with the fact that this Human Factors issue exists. It is well documented in industrial safety-critical systems development circles. If one has never worked in an honest safety-critical system development shop, then one is likely unaware of it.

The game that Tesla has been playing since the very start is to craft something passable *enough*, without any quantification of root causes (as that is expensive), to provide the illusion of "self-driving" without having to worry about any of the risk economics. Tesla is trying to exploit that dangerous Human Factors issue that I mentioned ***to Tesla's benefit***.

That is not the same thing as a safety-critical systems development program that is robustly quantifying and categorizing failure, understanding root causes, conducting a frank analysis of its ***whole*** system design and efficiently developing corrective action pathways.

***EDIT***: Removed the link to another sub. Before the edit, that was probably in violation of Rule 9 here on a second reading.

***EDIT 2***: Just a few formatting edits.

Comments (64)
Low-Win-6691 2026-01-01 22:03

I’ve been shitting on that thread and they don’t like it. Put 20 monkeys in a car and with enough tries it will do the same trick lol. Now apparently I’ve got Elon derangement syndrome

vasilenko93 2026-01-01 22:11

The car literally drove for over 2000 miles with zero intervention. What more do you want? Name me another car that can do anything close to that?

adamjosephcook 2026-01-01 22:16

The general public is not experienced in safety-critical systems. How they should be viewed. How they should be developed. As it is rather deep technically and niche. And, when dealing with human drivers, generally, everyone feels that they are an expert. And it is insulting to imply otherwise. "I use FSD daily! How dare you imply that I do not know my own vehicle!" That kind of stuff. Human drivers ***feel*** like they are in full control of every situation and many feel that others on the roadway are the irresponsible parties. Very hard to convince the public otherwise because we have all encountered drivers acting irresponsibly. Very hard to convince the public that they might not know everything at every time.

[deleted] 2026-01-01 22:19

[removed]

Low-Win-6691 2026-01-01 22:20

You can tell how much Tesla cares by their “Mad Max” mode when even the most conservative driving mode is still dangerous

adamjosephcook 2026-01-01 22:27

Happy to see that you did not read a thing that I just wrote. I did try to make it easy. Put the core issue right at Number 1. Take it up with Tesla. It is written in black and white on their own website. Cannot get any clearer than that.

DominusFL 2026-01-01 22:49

Sounds like the next step is to try it out. You should get hold of a car and see how it may affect your opinions.

CautiousToaster 2026-01-01 22:52

How we define “autonomous” is really important.

adamjosephcook 2026-01-01 22:56

Read the Number 2 point again. Specifically, the last sentence of that point.

adamjosephcook 2026-01-01 22:58

I agree definitions are important. Especially with safety-critical systems. Especially with unsophisticated consumers designated, by the manufacturer, to “constantly supervise” safety-critical systems that are black boxes to said consumers (which is an impossible ask).

Dolo12345 2026-01-01 23:03

100% if OP was in a HW4 model 3 with v14.2 he’d be like “what the fuck did I just post this shit is game changing and clearly the future”

External_Koala971 2026-01-01 23:05

To over-simplify: L2 is in the front seat, supervising. L3 is in the front seat, watching a movie. L4 is in the back seat, napping. Tesla explicitly labels its current software as "FSD (Supervised)." By definition, any system that requires a human to monitor the road and remain liable for the vehicle's actions is Level 2. Level 4 (like Waymo) means Tesla would assume legal and insurance liability for the drive, which they do not. "We can do L2 everywhere" does not mean L4, and "we had 0 L2 disengagements" does not mean L4. "Tesla can do perfect L2 across the USA but hasn't gotten regulatory approval for L3" does not mean L3. It is definitely a tremendous achievement that they've delivered L2 in most ODDs. But think about the fact that it took them 10 years to get to solid L2, and what that means for automakers just starting out (Rivian).

Legal-Actuary4537 2026-01-01 23:16

2,000,000 miles without intervention and it is getting close to consumer-ready.

External_Koala971 2026-01-01 23:21

Autonomy has already been defined: https://www.jdpower.com/cars/shopping-guides/levels-of-autonomous-driving-explained If you don’t like those definitions, you can use this: 1. Supervised = not autonomous, not self driving 2. Unsupervised = autonomous, self driving

Engunnear 2026-01-01 23:33

I’ve said it more than once, but it bears repeating in this venue: a “black box” whose operating parameters are generated by an AI that may not produce the same functional results from one run to the next can **never** be compatible with a robust safety-critical system design. If the designers can’t say definitively how and why the system is making what superficially appear to be conscious decisions, then there’s no way they can responsibly accept liability for the operation of that system.

adamjosephcook 2026-01-01 23:41

A few notes:

1. Those are not the definitions of the levels defined in J3016. I would recommend [this](https://users.ece.cmu.edu/~koopman/j3016/index.html) resource.

2. One cannot "supervise" a black box. That is impossible. A good example here is flight automation, where we invest ***enormous*** resources in continuously training and checking human pilots... reading human pilots into the system... demystifying the black box to the extent required... so that they may supervise the aircraft system under a safety management system. No such thing exists in the consumer automotive space. That is why I mentioned the 737 MAX where, in part, that was not done by Boeing... with inevitable results.

3. Tesla FSD-equipped vehicles have, passed down from Musk's deceptiveness, a Schrödinger's cat-like quality. The vehicle is "driving itself" when there is a ***seemingly*** uneventful trip. But if the Tesla vehicle crashes, the human driver is summarily excommunicated from the Tesla online community for allowing the vehicle to drive itself. Bad faith, through-and-through, by Tesla. It has always been that way.

4. I would define a tremendous achievement as a system that was maintained under the process described in the last sentence of my post. A robust, economical safety lifecycle developed for a novel safety-critical system. Not tossing something out there with deceptive marketing and a company that skips out on the check when it all goes wrong. That is the laziest engineering possible.

The bottom line is that every FSD-related incident where loss of human driver situational/operational awareness was a root cause... at least among the incidents we just so ***happen*** to know about right now... involved a human driver who thought, at some point in time, that their Tesla vehicle was driving itself. Every single one. Then, in a millisecond, their lives changed. No one thinks that it will ever happen to them. And many will get lucky. Some won't. All avoidable.

adamjosephcook 2026-01-01 23:42

Yup. Absolutely.

External_Koala971 2026-01-01 23:50

The problem is that in L2 (FSD), the human is a "Passive Monitor." Humans are notoriously bad at passive monitoring; our brains "load shed" when we are not actively participating. The deception is calling it "Full Self-Driving": Tesla's marketing targets the user's desire for autonomy, while the fine print targets the user's legal liability. As you noted, the moment the "black box" makes a mistake, the human, who has been demoted to a passenger in their own mind, is expected to instantly become the pilot.

adamjosephcook 2026-01-01 23:53

Yup. Agreed. Apologies for misreading your original comment.

External_Koala971 2026-01-02 00:14

https://www.reddit.com/r/TeslaFSD/s/TEVQduy6xy Read the many, many HW4 14.2 issues being experienced right now.

New-Disaster-2061 2026-01-02 00:16

Listen, I think Elon is the snake-oil used car salesman that everyone in this sub thinks he is. That being said, and being prepared to be downvoted: saying that FSD cannot drive itself is just dumb and the wrong argument to make. FSD has come a long way and people use it every day without intervention. I have been in it a few times and was pretty impressed. FSD can 100% drive itself. The argument is how safe it is.

I have always thought where Elon screwed up was getting rid of lidar and radar in the cars for extra information. I think if they had kept testing with them, Tesla would have a fleet of robotaxis and be eating Waymo's lunch, but Elon's dumb ideas got in the way.

All that to say: FSD can drive itself, but all signs point to Tesla knowing it is not yet at a point where they would feel safe to insure it. So the question is how safe exactly they are. The truth is we really don't know. We do know Tesla is screwing around with their safety numbers and not reporting fully, which suggests not good. But the really interesting and maybe telling thing to me is the stall of the Austin launch. There are two possible answers: they are waiting for more confirmation that the program is ready to be let loose, or they need to produce more cars to scale. Neither makes sense, because if they needed more confirmation they would put out more cars to get more data and adjust, and producing cars is no problem for them. So if they are not willing to place more taxis with safety operators on the road, that tells me they know they are still not close. I would not be surprised if the cyber cabs are not made with radars and lidar.

adamjosephcook 2026-01-02 00:25

I feel like I have addressed the "drive itself" thing both in my original post and in [this follow-up comment](https://www.reddit.com/r/RealTesla/comments/1q1gg43/comment/nx5w63m/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) elsewhere on this thread. If there is an "argument" on how "safe" that something is... then, objectively, the system cannot be safe. If a manufacturer puts a whole bunch of legal disclaimers in black-and-white in their official owner's manual that directly contradicts their marketing... [that expressly states that the Tesla vehicle is not "autonomous"](https://www.tesla.com/ownersmanual/modely/en_us/GUID-E5FF5E84-6AAC-43E6-B7ED-EC1E9AEB17B7.html#GUID-4EE67389-5F55-46D0-9559-90F31949660A)... then, objectively, it means that Tesla's own internal knowledge of the system differs in some unknown way with what consumers believe. That Tesla wants consumers to believe in order to sell vehicles and pricey add-ons. I cannot make it any clearer.

I_Am_AI_Bot 2026-01-02 01:33

Point 3 is very insightful and not something current FSD users are well aware of. I am not sure this is a proper analogy, but I think it is like hiring an assistant to help you manage things in your daily life. In the beginning, you keep observing him/her to see how he/she performs. As time goes by, trust is built up, and you become more relaxed about passing him/her more important things. You no longer know exactly how to manage the things you passed to the assistant and totally rely on him/her to do them. All of a sudden, the assistant withdraws all your money from the bank, since you gave him/her all the power. Your lawyer then tells you that you signed an agreement with the assistant not to pursue any loss due to what he/she might do to you, and you have no right to claim anything back.

nolongerbanned99 2026-01-02 01:40

I bet they don't know, but I also bet Waymo does know about their system, as they have robust sensor input. Tesla is a joke masquerading as a legit automaker.

adamjosephcook 2026-01-02 01:54

Skill erosion is a term commonly used both in industry and by the public. In industry, the "skill" at issue is the test operator's ability to reliably detect potential, unhandled system failure, and its erosion is a progressive failure that needs to be actively managed. And, in robust system safety lifecycles operating under a well-executed Safety Management System... it is. There are several strategies. Namely, the detection failure is ***itself*** part of the feedback loop into the safety lifecycle - as it also informs the systems developers of the degree to which downstream participants (say, the human driver of the consumer vehicle in this case) will be able to handle system failure (or not).
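As a rough illustration of what that feedback loop can look like in code (a hypothetical log structure I am inventing for the example; real Safety Management System tooling is far richer than this):

```python
from dataclasses import dataclass
from collections import defaultdict
from typing import Dict, List

@dataclass
class TestEvent:
    scenario: str          # e.g. "rail crossing", "unprotected left"
    system_failed: bool    # did the automation mishandle the scenario?
    operator_caught: bool  # did the test operator detect it and intervene?

def detection_rates(events: List[TestEvent]) -> Dict[str, float]:
    """Per-scenario rate at which operators caught real system failures."""
    caught: Dict[str, int] = defaultdict(int)
    total: Dict[str, int] = defaultdict(int)
    for e in events:
        if e.system_failed:
            total[e.scenario] += 1
            caught[e.scenario] += int(e.operator_caught)
    return {s: caught[s] / total[s] for s in total}

log = [
    TestEvent("rail crossing", True, False),
    TestEvent("rail crossing", True, True),
    TestEvent("unprotected left", True, True),
]
print(detection_rates(log))  # {'rail crossing': 0.5, 'unprotected left': 1.0}
# A low rate for a scenario is itself a finding: it says downstream humans are
# unlikely to backstop that failure mode in the field.
```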

Large_Complaint1264 2026-01-02 03:43

This is a weird fantasy you live in

[deleted] 2026-01-02 04:38

[removed]

SuccotashDesigner274 2026-01-02 05:03

Get out of here. This whole subreddit is a cult & you guys should be ashamed.

Ascending_Valley 2026-01-02 05:12

I work with similar models, retrained from time to time. A test suite and prediction metrics don't reveal inner workings but can isolate regressions, albeit with some margin of uncertainty. Tesla definitely runs their models through a simulated environment and can look for divergence, both expected and otherwise. Not saying they are at autonomous-ready - the public models don't seem there - but iterative models can be ratcheted forward. However, tuning out undesired infrequent behavior can still be very challenging. Out-of-distribution behavior may be more volatile from version to version.
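For what it's worth, the "look for divergence" part can be sketched very simply. Below is a hypothetical harness (no relation to Tesla's actual tooling; the policies and field names are stand-ins): replay a fixed scenario suite through two model versions and flag every scenario where the planned behavior changes, so a human can triage whether the change was intended.

```python
from typing import Callable, Dict, List

# A "scenario" here is whatever fixed input the simulator replays; a "policy"
# maps it to a discrete planned action so two versions can be compared directly.
Scenario = Dict[str, float]
Policy = Callable[[Scenario], str]

def divergence_report(old: Policy, new: Policy, suite: List[Scenario]) -> List[int]:
    """Indices of scenarios where the two model versions disagree."""
    return [i for i, s in enumerate(suite) if old(s) != new(s)]

# Usage sketch with stand-in policies (real ones would wrap the trained models):
old_policy = lambda s: "stop" if s["dist_to_stop_sign_m"] < 30 else "proceed"
new_policy = lambda s: "stop" if s["dist_to_stop_sign_m"] < 25 else "proceed"
suite = [{"dist_to_stop_sign_m": d} for d in (10.0, 27.0, 50.0)]

print(divergence_report(old_policy, new_policy, suite))  # -> [1]
```

Of course, this only tells you that behavior changed, not which version is right - that triage is the expensive part.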

Plus_Boysenberry_844 2026-01-02 06:08

I think that you are describing Shohei and his experience with his translator. lol

Plus_Boysenberry_844 2026-01-02 06:11

Thank you for the thoughtful statement. Agreed. FSD is not ready. It may not be ready for some time. Hopefully by 2045 it will be ready because I plan on sleeping in my car in that year.

demaraje 2026-01-02 06:14

If you've seen Chernobyl, you remember the scene where the RBMK design is explained. Why they used a coolant that can blow up (water) and a moderator (graphite) that can catch fire. Usually you only use one of them, to reduce the risk. Moscow had water-water reactors (VVER), which were WAY safer. Why? Because the RBMK is CHEAPER.

adamjosephcook 2026-01-02 06:35

Indeed. The issue with Tesla, as I have often argued on this sub and elsewhere, is more fundamental. There are two major components here as I see it:

1. Given, effectively, the size of their ODD when it comes to consumer-available FSD... there is no chance that Tesla can actually establish the ***full*** pathways of failure for any given scenario, assuming that the unsophisticated human driver even bothers to report it. Tesla is simply too far from the field. And that is why, undoubtedly, Tesla moved the goalposts again on the program in adopting a limited-vehicle, Austin-based ODD (instead of the previous goalpost of FSD-equipped vehicles having a wide-open ODD for all Tesla vehicles ever sold, after a certain software update). Tesla knows that they don't know. Tesla got cold feet.

2. As I mentioned briefly at the end of my post, Tesla (and Musk) are simply not going to be interested in a frank assessment of the ***whole*** system at any point in time - even in response to new failure information. Musk is driven by cost reductions, was pushed by supply chain snags during COVID-related shortages (to remove ultrasonics and radar), and is boxed in by prior, desperate promises on the sensor suite featured on Tesla vehicles (made when lidar unit costs were much higher than they are today). So, Tesla tries to paper over it with software and on-vehicle compute. Just hoping that, given enough time, natural DL/CV advancements and on-vehicle compute will square a good-enough circle to keep enough of the bad headlines away.

It is really quite sociopathic, frankly.

I_Am_AI_Bot 2026-01-02 07:33

lol yes, or that of Nicolas Puech and his financial advisor. Before being ripped off, I think both of them would have said Ippei Mizuhara and Eric Freymond were great, trustworthy, helpful, "level 4" assistants... until they suffered a big loss from them.

sjh1217 2026-01-02 10:26

You clearly haven’t driven in one lately. They are absolutely undeniably able to drive themselves. My fsd usage is like 99.5% of my miles.

[deleted] 2026-01-02 11:50

The Boeing comparison is spot on. There were engineers at the top once, but greedy idiots pushed them out and ran it literally into the ground - killing others KNOWINGLY for profits. Just HOPE they might not die! STOCK UP! Jail is too good for the murder committed. They knowingly accept that people die. How is it any different from poisoning 1 in every 10,000 sodas and HOPING nobody drinks that bottle? It's only a matter of time! That's not an acceptable risk! Zero has to be the measurement for success. Not "luck".

Engunnear 2026-01-02 12:30

Why? Because we can actually define the areas where Tesla falls short of offering a true, viable autonomy product?

torokunai 2026-01-02 14:35

I've got the FSD v14.2.2 free trial and am going to have it drive me to work in about 30 minutes. I have ~5,000 miles of attended FSD in my car over the past 2 years and it's worked "pretty" well, with significant caveats regarding relatively low-IQ lane selection and a systemic failure to dodge potholes and other crap in the lane ("you had ONE JOB!!"...). Elon knows FSD is the "Hail Mary" he needs to keep the growth story alive now that he's abandoned mass-market BEVs (part of the earlier "+50% CAGR" growth narrative was that most of the 20M/yr unit sales would be cybertaxis, since there is no way Tesla can scale to that level of customer service - that's ~2X Toyota scale and Elon is not a people person like that). I'm in CS and did Andrew Ng's "ML" course 10 years ago, so I kinda know what Tesla has been doing with ADAS. I honestly don't know if they'll have it rock-solid reliable by 2030. I give them a 50% chance, I guess, and can scale that down for tighter deadlines. Obviously it was 0% for 2025, contrary to Elon's promises from last July.

Designer-Salary-7773 2026-01-02 14:51

For me it's pretty simple... Tesla cannot even manage the LV tunnel environment... which, from a deconfliction and nav perspective, would be orders of magnitude simpler compared to public roads/highways. Full stop. Game over.

DominusFL 2026-01-02 16:07

The argument doesn't make sense to me. When I'm a passenger in an Uber or any car that someone else is driving, it's like a black box to me. I don't know what the driver is thinking or how they make decisions. I don't think it's necessary to understand someone else's decision-making process as long as you trust their abilities. Therefore, I suggest that the author spends some time using a vehicle with Full Self-Driving (FSD) features to see if it changes their opinion. It might not alter their viewpoint, but as someone who currently owns an FSD vehicle, I believe it could.

DominusFL 2026-01-02 16:08

Have you spent any time in a Tesla with FSD lately? It's definitely an experience everyone should have before forming an opinion about it. After trying it, it's perfectly fine to conclude that it's not ready yet or to have other concerns. But without trying it, opinions are based on assumptions and possible misinformation.

adamjosephcook 2026-01-02 16:21

You don’t have any system safety responsibilities in terms of the operation of your Uber vehicle. You have FULL system safety responsibility in a FSD-equipped vehicle - a relationship between driver and vehicle that is no different than if the vehicle was equipped with no automated driving system at all. Hence, why it is not “driving itself” at any time or under any conditions.

rocketonmybarge 2026-01-02 16:46

When Robotaxi money printers?

That_Abbreviations61 2026-01-02 18:05

I've had FSD on my Model Y since February or so of 2020, and about 140k miles. Got one of the first 1,000 or so sold. Joining this sub has possibly saved my life. It has jolted me out of obvious complacency and reminded me almost daily not to trust this thing. Instead, to proactively mistrust this thing. Even to occasionally ridicule and hate on this thing. As OP says, it's super dangerous to be seduced by the black magic, and it only takes a millisecond for your life to change at 70 mph. Like Homer's sirens, the promise of this technology is a beautiful song, but I felt it was only a matter of time before my ship crashed on a rock.

That_Abbreviations61 2026-01-02 18:11

Hope I don't live near you 🤞

adamjosephcook 2026-01-02 18:16

Yup. Basically. And to be clear, it is not my motivation here to give Tesla vehicle owners or those that choose to utilize FSD (“Supervised”) a hard time or talk down to them. Just laying out the reality of the situation as I see it. Just reiterating the considerable risks of this system, as designed and as marketed. I see the social media talk lately and it is quite disturbing. I would very much like to avoid, well, avoidable death and injury. Public roadways are dangerous enough.

[deleted] 2026-01-02 19:05

[deleted]

Engunnear 2026-01-03 15:33

My god, that sub is some kind of combination of misanthropic and delusional. That might be the single most toxic sub description I’ve ever seen.

Fun_Volume2150 2026-01-04 08:15

California has determined that they can’t.

AgentSmith187 2026-01-04 22:01

How many times did it do this and how repeatable is it? "We didn't have an accident" one time is not acceptable in a safety-critical system. You need to not have an accident (or not need an intervention from outside the safety-critical system) every time. Tesla fanbois set such a low bar.... I work in a safety-critical transport role and every near-miss event requires investigation, even if it was caught just in time to avoid an accident. Now try to hold FSD to that standard. Every time it was speeding, ran a stop sign, had a phantom braking event, brushed a curb, etc. is a near miss, even if the intervention of the human driver avoided an accident. So each one should be investigated and the system modified to make sure it doesn't happen again.
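To put rough numbers on how little a single clean run proves (a back-of-envelope sketch using the ~2,000-mile zero-intervention figure claimed upthread; the statistics are standard, the mileage is just the claim, not verified fleet data):

```python
import math

# One cross-country run: ~2,000 miles, zero critical interventions observed.
miles = 2_000
confidence = 0.95

# With zero events observed over n miles, the one-sided upper confidence bound
# on the per-mile event rate p solves (1 - p)^n = 1 - confidence,
# i.e. p ≈ -ln(1 - confidence) / n (the "rule of three", roughly 3/n).
p_upper = -math.log(1 - confidence) / miles

print(f"95% upper bound: one critical event per {1 / p_upper:,.0f} miles")
# -> roughly one event per ~670 miles is still fully consistent with that run.
# A single clean drive says nothing about one-in-100,000-mile reliability.
```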

Jaywhatthehell 2026-01-05 01:28

Right! But consider applying those standards to actual humans when assessing their ability to drive. The roads would be empty.

ionizing_chicanery 2026-01-05 08:55

> Even in well-managed system safety lifecycles, which Tesla obviously has zero interest in maintaining, there are a myriad of Human Factors risks - the most notable being that, given enough experience with a system, the test operator begins to "trust" the system. Form a mental symbiosis with it. The test operator naturally becomes complacent. Does not even realize it. Starts subconsciously ignoring potential system failures that should be documented and addressed.

This is a big point that autonomous driving experts make. A user-monitored system that needs critical intervention on average every 10,000 miles is in all likelihood less safe than one that needs critical intervention every 200 miles, because the safety monitor is not trained for this and is liable to become complacent over time, especially if the system failures are sudden and arbitrary. And of course, just because someone drove X miles without intervention (with motivation to rack up as many miles as they could) and didn't get in an accident, it doesn't mean that the vehicle was always driving safely. We don't know, because no one is analyzing the data for red flags and near misses. I've done aerospace safety-critical work with HITL (human in the loop) and let me tell you, depending on the user to make split-second corrections as part of hazard prevention would absolutely never pass muster. And this is with highly trained users.
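To make the 10,000-vs-200 point concrete, here is a toy calculation (the catch probabilities are invented purely for illustration; the only claim is the shape of the trade-off, not the specific numbers):

```python
# Toy model: unmitigated failures per mile =
#   (system failures per mile) x (probability the human monitor misses one).

def unmitigated_per_million(miles_per_failure: float, p_catch: float) -> float:
    return (1.0 / miles_per_failure) * (1.0 - p_catch) * 1_000_000

# Hypothetical: frequent failures keep the monitor engaged and practiced...
frequent = unmitigated_per_million(200, p_catch=0.999)
# ...while rare, arbitrary failures invite complacency and slow reactions.
rare = unmitigated_per_million(10_000, p_catch=0.80)

print(f"1 failure / 200 mi, attentive monitor:  {frequent:.1f} per million miles")
print(f"1 failure / 10,000 mi, complacent one:  {rare:.1f} per million miles")
# With these made-up catch rates: 5.0 vs 20.0 - the "better" system comes out
# roughly four times worse once the human monitor is accounted for.
```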

ionizing_chicanery 2026-01-05 09:07

Like what even is Tesla's V&V (verification and validation) for FSD? Do they even have anything formal?

ionizing_chicanery 2026-01-05 09:10

Ah yes a cult of no figure. Do you even know what a cult is? Big "atheism is a religion" energy there.

ionizing_chicanery 2026-01-05 09:16

> Tesla definitely runs their models through a simulated environment and can look for divergence, both expected and otherwise. I wonder to what extent. For years FSD was well known to have pretty reproducible issues with things like railroad crossings. Which makes me think their testing is not that robust or they've just accepted a certain level of failure. Deep world simulation is pretty challenging work.

Engunnear 2026-01-05 10:15

They claim that they do, but the fact that it’s Schrödinger’s Autonomy (it’s “autonomous” until it can’t handle a situation, at which point it becomes a driver-assistance system) tells me that whatever V&V efforts they have are not run by competent people.

Engunnear 2026-01-05 10:20

Those standards are applied to every driver on the road - that’s why personal liability is a thing.

Engunnear 2026-01-05 10:31

I think it’s pretty obvious that Tesla’s entire development process is centered around seeking what they consider to be an optimal fault tolerance. I’ve been saying for over a decade that Tesla does hardly anything that any automotive OEM *can’t* do, but most of Tesla’s “advantages” are things that other OEMs *won’t* do. This absolutely extends to selling a highly risky product to willing fans of the brand who are relatively unlikely to blame the company when something goes wrong.

Ascending_Valley 2026-01-05 13:34

Yup. That's why I said divergence: changes from one version to another. Creating new training scenarios and outcomes in a simulated world is a different level of problem. Aside from creating the world itself, using a realistic training distribution for the models is crucial. As the training diverges from the real world, calibration and inference reliability become more challenging.

k7u25496 2026-01-05 20:40

[deleted]

ngolo0101 2026-01-07 22:08

real. the sheer amount of hate for elon (which is definitely understandable) undermines the work that the engineers at tesla are doing. comparing FSD to the boeing 737 MAX MCAS debacle is also wrong. MCAS could override pilot input whereas FSD allows the driver to take over at any moment. MCAS killed far more people in a way shorter time frame than FSD has

ngolo0101 2026-01-07 22:13

I'd worry more about drunk, fatigued, and distracted drivers if I were you

Argon522 2026-01-08 03:45

That... wasn't the point. The point is complacency kills, and much like how MCAS was sold as "the exact same, but not" and pilots were led into complacency and trusted the system until it was too late, Tesla does the exact same thing. FSD is "almost level 4", "it's basically magic!", "I've never had an issue in my quadrillion miles driving!". Those are statements that make a safety engineer cringe. Obvious signs the operator is complacent and will let the system slip because "it's always worked before". It's what makes FSD dangerous: it makes mistakes just far enough apart that the operator doesn't expect them, so when they do happen they are not prepared.

wokolie 2026-01-08 09:34

Pilots weren't even aware that MCAS existed. There were no meaningful disclaimers provided by Boeing, whereas Tesla officially does provide them, although the marketing language used by Elon and his fans is very dangerous, I agree.
