One of my biggest criticisms of Elon is that he rarely takes accountability for his words and nobody ever holds him accountable by asking directly. This isn't a high bar to clear, but some people continuously excuse him as being "overly-eager" in his predictions. If that was the case, he could still provide reasonable updates when a predicted date is missed along with an explanation, even if it's just: "Hey, it turns out this problem was much more difficult than we initially expected and it'll take longer". A lot of the problems that he's trying to solve are actually quite difficult, so it's understandable that predictions will be imprecise... But when you realize that your predictions are going to be wrong, you should have the basic decency to update people.
When you're wielding immense amounts of money, power, and influence I think it's worth trying to do the bare-minimum to hold people accountable for their words and claims. Otherwise your words are meaningless.
In America, you fail the second you apologize or take accountability. Ignoring criticism and deflecting all the time gets you further; it's part of the game. Unfortunately, this is just an accepted social-science-y thing at this point. It's very much a cultural thing of the past couple of decades.
Tesla doesn't cultivate an engineering culture. Tesla encourages a culture of lying. Some engineers have become so corrupted by it that they're willing to lie about things that there's no need to lie about, like quarter mile times of the Cybertruck:
Yes, there is. In every place that I've worked, including my current position, acknowledging when you're wrong or have failed at something increases trust in you and your professionalism.
People who always have an excuse, try to shift blame, etc., are assumed to be lacking in competency (let alone ethics and trustworthiness).
My point is less around how engineers behave, and more around how organisations behave.
If an organisation is constantly retrenching experienced staff and cutting corners to increase earnings rather than being driven by engineering first, it doesn't matter what the engineers do amongst themselves. This culture, in fact, rewards engineers doing a bad job.
Not all organizations behave that way, though. If you reword my comment to indicate the company attitudes themselves, it still largely holds true.
I confess to a selection bias, because I won't work at a company that doesn't behave that way. Life is too short for that BS. However, that I maintain employment at the expected pay rates while doing so indicates that there are a lot of companies who don't behave the way you describe.
All that said, I certainly don't deny that there are also a lot of companies who do behave as you describe.
It's too bad, because "I'm sorry; this is my fault" is the biggest defuser of anger and the best way to appease mad customers. Try it sometime; the other party goes from ready to kill you to apologetic themselves (if you're genuine). Unfortunately it's seen as a sign of weakness by people like Elon and his cult of impersonators, and as an admission of liability by the litigious crowd. If you can be strong, confident, and ready to admit it when you're wrong, you'll not only be successful in confrontational situations but also not be a giant dick.
We once had a customer on a project that we'd messed up. I told the customer I was sorry about that and that I'd make an effort to fix the problem. I could see they were happy to hear that. But afterwards my manager called me at home and got mad I'd said sorry to them. His philosophy was never to apologize. Funny thing, later on that customer offered me a better paid position...
I think the problem is that many of our senior leaders are just not that good, and the best they can do is model themselves on whoever they think is successful, like Musk. Then we get a predictable outcome that repeats. Remember when every senior leader concluded that "Steve Jobs treated people like shit, but was very successful; therefore the path to success is treating people like garbage"? That was a global phenomenon for years. The "admitting failure is weakness" belief is even stronger.
That's an interesting take. What I heard from a very old friend of my father is the opposite:
> Knowing when to say thanks and when to say sorry is the key for success.
...and I have used this piece of advice ever since; it has paid me handsomely. Of course, this doesn't allow you to be shameless; on the contrary, it requires you to stick to your values as a prerequisite.
I think what allows Elon to behave like that is how he can retaliate without any repercussions since he has tons of money and influence in some circles.
> One of my biggest criticisms of Elon is that he rarely takes accountability for his words and nobody ever holds him accountable by asking directly. This isn't a high bar to clear, but some people continuously excuse him as being "overly-eager" in his predictions.
I've written off pretty much everything he says since sometime before 2020, too many lies about self-driving to count.
But I'm not someone with any influence (nor do I really want that kind of attention).
> when you realize that your predictions are going to be wrong, you should have the basic decency to update people
Not to get too political, but the last I've heard of Elon Musk is that he was speaking to/mobilizing right wing extremists at a big protest in London. I am also pretty sure he has been trying to do similar things in other European nations (for whatever reason).
It seems to me that it is a bit late to plead for "basic decency" at this moment.
But at the same time let me ask you, what has he got to lose? What financial or reputational risk is he taking by not taking any accountability?
Society needs a "no assholes" policy in order to stay high trust. Elon not being a pariah because of his grifting is a sign the US is becoming a lower and lower trust society. And it's the billionaires making it so.
He lies relentlessly even to customers who paid for the product.
I know because I’m one of them. FSD paid in full almost seven years ago, still does absolutely nothing in Europe. A five-year-old would do a better job at driving because the Tesla can’t even see speed limit signs correctly.
Tesla takes no responsibility for their misleading marketing and years of lies. Most recently Musk promised in early 2025 that these old cars would get a hardware update that will finally enable the paid-for FSD (as if…) The company itself pretends to know nothing about this latest promise made by its CEO.
It’s insane that a business with a trillion-dollar market cap operates like this. It seems to be more of a cult than a real company.
So why take money for it? And computer vision is well past the stage where the information can be read from these signs; there's enough training data, so that's not a real hurdle. If different road geometry or traffic customs/rules are the issue, then just admit that FSD can't generalize like a human and is overfit to the US. Why lie and pretend it's almost human-level?
How do EU regulations prevent my car from recognizing speed limit signs?
The real problem is that Tesla sold this in Europe, but never put the slightest effort towards actually developing the localized ML model that could make it work. (Of course the hardware sucks too, but they could have done a much better job with it.)
I was being a bit facetious (related to Elon's previous bullshit claims that it was only a regulatory issue that everybody's Tesla can't moonlight as a robotaxi by the end of some year in the past... sure, Elon).
A surprising chunk of HN commenters earnestly believe that "caveat emptor" should be some kind of horrible default way of life. Like businesses should be able to sell anything they want, without regulation, as long as they can convince people to buy those things. And if those things don't work--well it's 100% the customer's fault for being so gullible and not being responsible for the company's quality.
I don't understand why people would want to live that way and argue against those who fight for better.
But my guess is that it's a defense mechanism, essentially the Just World fallacy. "It would really suck to have that bad thing happen to me through no fault of my own. *spoink!* I'm careful (and smart) therefore bad things won't happen to me" (https://dilbert-viewer.herokuapp.com/1999-09-08)
> because the Tesla can’t even see speed limit signs correctly.
This is sad and atrocious. Not only can a Ford Puma (an econobox compared to a Tesla) read almost all speed limit signs correctly, it can also pull speed limit data correctly from its onboard maps when there are no signs. These maps can be updated via WiFi or an onboard modem too.
Tesla mocked "big auto industry", but that elephant proved that it can run if it needs to.
Interestingly, with SpaceX he is much more willing to change plans; he and the company seem to be searching for the right solution.
For self-driving, he simply decided X is right, talked about exponentials, and no matter how many times it fails, there is no reflection whatsoever.
He's a psychopath, in the sense he doesn't feel normal emotions like remorse and empathy. He will lie to your face to get you to buy his product and when it fails to deliver on promises he will lie again.
Well, I have to admit that your friend has a point. Humans are bad at reacting quickly and correctly to unexpected situations, and some debris large enough to damage your car showing up from out of nowhere after several hours of boring driving along a largely straight highway with little traffic is definitely one of these situations. But a self-driving system worth its salt should always be alert, scanning the road ahead, able to identify dangerous debris, and react accordingly. So, different pair of shoes...
I'm not convinced. The debris is clearly visible to the humans a long way off and the adjacent lane is wide open. Avoiding road debris is extremely common even in more congested and treacherous driving conditions. Certainly it's possible that someone texting on their phone or something might miss it, but under normal circumstances it could have been easily avoided.
100% it would have. One of the main things the LiDAR system does is establish a "ground plane", which is the surface on which the car is expected to drive. Any hole or protrusions in that plane stick out like a sore thumb to a LiDAR system, you'll be able to see it in the raw data without much of a feature detector, so detecting them and reacting is fast and reliable.
Contrast with Tesla's "vision-only" system, which uses binocular disparity along with AI to detect obstacles, including the ground plane. It doesn't have as good a range, so with a low-profile object like this it probably didn't even see it before it was too late. Which seems to me a theme for Tesla autonomy.
In addition to detecting the object, Waymo has to make some determination about the material. Rigid heavy metal = slam on the brakes and/or swerve. Piece of tire or plastic bag = OK to run over if swerving or hitting the brakes would be more dangerous. Really hard problem that they're concerned about getting right before they open up highway driving.
LiDAR is also good for that because you can measure light remission and figure out how much of the LiDAR energy the material absorbed. Different materials have different remission properties, which can be used to discriminate between them. That's a compounding advantage, because we tend to paint road line markers with highly reflective paints, which makes them blindingly obvious to a LiDAR.
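Roughly, the ground-plane idea looks like this. A toy sketch in Python, purely illustrative and not Waymo's or any vendor's actual pipeline; the thresholds and point cloud are made up:

    # Toy sketch: fit a plane to LiDAR returns, then flag anything protruding above it.
    import numpy as np

    def fit_ground_plane(points):
        # Least-squares fit of z = a*x + b*y + c over an Nx3 point cloud,
        # assuming most returns come from the road surface.
        A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
        coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
        return coeffs

    def find_protrusions(points, coeffs, threshold_m=0.10):
        # Return points sticking up more than threshold_m above the fitted plane.
        a, b, c = coeffs
        height = points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)
        return points[height > threshold_m]

    # Flat road with one ~18 cm tall object about 30 m ahead.
    rng = np.random.default_rng(0)
    road = np.c_[rng.uniform(0, 60, 5000), rng.uniform(-4, 4, 5000), rng.normal(0, 0.02, 5000)]
    debris = np.c_[rng.uniform(30, 31, 50), rng.uniform(-0.5, 0.5, 50), rng.uniform(0.15, 0.20, 50)]
    cloud = np.vstack([road, debris])

    hits = find_protrusions(cloud, fit_ground_plane(cloud))
    print(len(hits), "returns flagged above the ground plane")

Real systems do something far more robust (RANSAC fits, remission, temporal tracking), but the point stands: a low-profile solid object shows up directly in the geometry, no learned feature detector needed.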
> and we should not tolerate self-driving systems that are as good as the worst of us
The person you replied to didn't do that, though:
> But a self-driving system worth its salt should always be alert, scanning the road ahead, able to identify dangerous debris, and react accordingly. So, different pair of shoes...
I think they meant the person you were responding to never claimed that the person they were responding to said that we should tolerate self-driving systems that are no better than the worst of us, not that the person that the person you were responding to was responding to never said the thing you very clearly directly quoted.
I think you might have misunderstood someone here. The person you quoted made a generic statement about what we should expect from an autonomous vehicle, but never said (nor implied imho) that the person he responds to didn't expect the same.
I've been driving on local and highway roads for 30 years now and I have never come across a piece of debris so large that driving over it would damage my car. Seeing that video, I don't have high confidence that I would have dodged that hazard - maybe 70% sure? The thing is, usually there is plenty of traffic ahead that acts very obviously different in situations like this that helps as well.
All that to say that I don't feel this is a fair criticism of the FSD system.
> I have never come across a piece of debris so large that driving over it would damage my car
More likely you simply drove around the debris and didn't register the memory because it's extremely unlikely that you've never encountered dangerous road debris in 30 years of driving.
I think it's probably because of mostly driving in enough traffic that other cars would have encountered any critical objects first and created a traffic jam around an impassable section.
Unless you're on your phone, with that clear of a view and that much space, 100% you would dodge that, especially in a sedan where your clearance is lower than a truck.
No way. I call in road debris on the freeway once every couple of months. People swerve around it and if it’s congested, people swerving around it create a significant hazard.
Honestly no, not in the middle of the road, but plenty on the side. The only things I come across in the middle of the roads are paper bags or cardboard for some reason.
But also, I doubt you would break your swaybar running over some retreads
Driving I-5 up to Portland, I had to dodge a dresser that was somehow standing upright in the middle of the lane. The truck in front of me moved into the other lane, revealing that thing just standing there, and I had to quickly make an adjustment similar to what this Tesla should have done. Teslas also have lower bellies; my Jeep would have gone over the debris in the video no problem.
> All that to say that I don't feel this is a fair criticism of the FSD system.
Yes it is because the bar isn't whether a human would detect it, but whether a car with LiDAR would. And without a doubt it would, especially given those conditions: clear day, flat surface, protruding object is a best case scenario for LiDAR. Tesla's FSD was designed by Musk who is not an engineer nor an expert in sensors or robotics, and therefore fails predictably in ways that other systems designed by competent engineers do not.
I don't disagree with that characterization of the technical details. However I felt the task those drivers set out was asking a different question: how good would the FSD system be at completing a coast-to-coast trip? I don't think this can be answered after hitting a singular, highly unlikely accident without a lot more trials.
Imagine there was a human driver team shadowing the Tesla, and say they got T-boned after 60 miles. Would we claim that human drivers suck and have the same level of criticism? I don't think that would be fair either.
If you don't disagree on the characterization of the technical details, then you must realize how very fair it is for us to criticize the system for failing in the exact way it's predicted to fail. We don't need 1000 more trials to know that the system is technically flawed.
What if there is no debris the other 999 times, and the system works fine? The video does not give me that information as a prospective Tesla customer. This looks like a fluke to me.
Those 999 other times, the system might work fine for the first 60 miles.
This is a cross-country trip. LA to New York is 2776 miles without charging. It crashed the first time in the first 2% of the journey. And not a small intervention or accident either.
How you could possibly see this as anything other than FSD being a total failure is beyond me.
>asking a different question: how good would the FSD system be at completing a coast-to-coast trip?
>They made it about 2.5% of the planned trip on Tesla FSD v13.9 before crashing the vehicle.
This really does need to be considered preliminary data based on only one trial.
And so far that's 2.5% as good as you would need to make it one way, one time.
Or 1.25% as good as you need to make it there & back.
People will just have to wait and see how it goes if they do anything to try and bring the average up.
That's about 100:1 odds against getting there & back.
One time.
Don't think I would want to be the second one to try it.
If somebody does take the risk and makes it without any human assistance though, maybe they (or the car) deserve a ticker-tape parade when they get there like Chas Lindbergh :)
It does look like lower performance than a first-time driving student.
I really couldn't justify 1000:1 with such "sparse" data, but I do get the idea that these are some non-linear probabilities of making it back in one piece.
It seems like it could easily be 1,000,000:1 and the data would look no different at this point.
> I take it that you're saying that if we provide the Tesla a road which contains nothing to hit, it won't hit anything?
Not quite. I am saying that basing the judgment on a rare anomaly is a bit premature. It's a sample size of 1, but I base this on my own driving record of 30 years and much more than 3000 miles where I never encountered an obstacle like this on a highway.
> Also, not interesting
I would have liked to see the planned cross-country trip completed; I think that would've provided more realistic information about how this car handles with FSD. The scenario of when there is a damn couch or half an engine on the highway is what's not interesting to me, because it is just so rare. Seeing regular traffic, merges, orange cones, construction zones, etc. etc. now that would have been interesting.
Here's an edge case I'm sure everybody has seen very rarely, but that's still not as uncommon as you think. Watch the video by Martinez if the top video is not correct:
Now 2018 might have been a record year, but there have been a number of others since then.
Fortunately for us all, drivers don't have to go through Houston to get from CA to NY, but you're likely to encounter unique regional obstacles the further you go from where everything is pre-memorized.
As we know 18-wheelers are routinely going between Houston and Dallas most of the way autonomously, and a couple weeks ago I was walking down Main and right at one of the traffic lights was one of the Waymos, who are diligently memorizing the downtown area right now.
I'll give Tesla the benefit of the doubt, but they are not yet in the same league as some other companies.
Tesla in 2016: "Our goal is, and I feel pretty good about this goal, that we'll be able to do a demonstration drive of full autonomy all the way from LA to New York, from home in LA to let’s say dropping you off in Time Square in New York, and then having the car go park itself, by the end of next year" he said on a press call today. "Without the need for a single touch, including the charger."
Roboticists in 2016: "Tesla's sensor technology is not capable of this."
Tesla in 2025: coast-to-coast FSD crashes after 2% of the journey
Roboticists in 2025: "See? We said this would happen."
The reason the robot crashed doesn't come down to "it was just unlucky". The reason it crashed is because it's not sufficiently equipped for the journey. You can run it 999 more times, that will not change. If it's not a thing in the road, it's a tractor trailer crossing the road at the wrong time of day, or some other failure mode that would have been avoided if Musk were not so dogmatic about vision-only sensors.
> The video does not give me that information as a prospective Tesla customer.
If you think it's just a fluke, consider this tweet by the person who is directing Tesla's sensor strategy:
Before you put your life in the hands of Tesla autonomy, understand that everything he says in that tweet is 100% wrong. The CEO and part-time pretend engineer removed RADAR thinking he was increasing safety, when really he has no working knowledge of sensor fusion or autonomy, and he ended up making the system less safe. Leading to predictable jury decisions such as the recent one: "Tesla found partly to blame for fatal Autopilot crash" (https://www.bbc.com/news/articles/c93dqpkwx4xo)
So maybe you don't have enough information to put your life in the hands of one of these death traps, but controls and sensors engineers know better.
> Well, I have to admit that your friend has a point. Humans are bad at reacting quickly and correctly to unexpected situations, and some debris large enough to damage your car showing up from out of nowhere
I read this comment before seeing the video and thought maybe the debris flies in with the wind and falls on the road a second before impact, or something like that.
But no, here we have bright daylight, perfect visibility, the debris is sitting there on the road visible from very far away, the person in the car doing commentary sees it with plenty of time to leisurely avoid it (had he been driving).
Nothing unexpected showed up out of nowhere, it was sitting right there all along. No quick reaction needed, there was plenty of time to switch lanes. And yet Tesla managed to hit it, against all odds! Wow.
My impression of Tesla's self driving is not very high, but this shows it's actually far worse than I thought.
Literally, the passenger saw it and leaned in, and the driver grabbed the steering wheel, seemingly to brace himself. That object on the road was massive, absolutely huge as far as on-road obstacles go. The camera doesn't do it any justice: it looks like it's 3 feet long, over a foot wide, and about 6 or 7 inches high lying on the road. Unless a human driver really isn't paying attention, they're not hitting that thing.
> Well, I have to admit that your friend has a point. Humans are bad at reacting quickly and correctly to unexpected situations,
This was not one of those situations.
> and some debris large enough to damage your car showing up from out of nowhere after several hours of boring driving along a largely straight highway with little traffic is definitely one of these situations.
Again, this was definitely not one of those situations. It was large, it was in their lane, and they were even yapping about it for 10 seconds.
> But a self-driving system worth its salt should always be alert, scanning the road ahead, able to identify dangerous debris, and react accordingly. So, different pair of shoes...
This is what humans already do (and if we didn't do it, we'd be driving off the road). Based on what you're saying, I question whether you're familiar with driving a car, or at least with driving on a highway between cities.
Do they? "Many humans" would hit that? The humans in the car spotted the debris at least 8s before the impact. I don't think any humans would hit that in broad daylight unless they were asleep, completely drunk, or somehow managed to not look at the road for a full 10s. These are the worst drivers, and there aren't that many because the punishment can go up to criminal charges.
The argument that "a human would have made that mistake" backfires, showing that every Tesla equipped with the "safer than a human driver" FSD is in fact at best at "worst human driver" level. But if we still like the "humans also..." argument, then the FSD should face the same punishment a human would in these situations and have its rights to drive any car revoked.
Or they would hit it if they were busy fiddling with the car's autodrive system. These humans would have avoided it had they not wasted time speculating about whether the autodrive system would save them. They would have been safer in literally any other car that didn't have an autodrive.
Bullshit. Some humans might hit that because they weren't paying attention, but most people would see it, slow down, and change lanes. This is a relatively common scenario that humans deal with. Even the passenger here saw it in time. The driver was relying on FSD and missed it.
I don't think FSD has the intelligence to navigate this.
When the self-driving car killed a pedestrian several years ago, the initial sentiment on this site for the first few hours was essentially "those dastardly pedestrians, darting into traffic at the last second, how are you supposed to avoid them?" It took several hours for enough information to percolate through to make people realize that the pedestrian had been slowly and quite visibly crossing the road, and neither the self-driving car nor the safety driver did a thing to react to it.
Another thing to keep in mind is that video footage is much lower quality than what we can see with our human eyeballs. At no point in the video can I clearly identify what the debris is, but it's clearly evident that the humans in the car can, because they're clearly reacting to it seconds before it's even visible to us in the dash-cam-quality footage. I will freely accept that many drivers are in fact bad drivers, but a carcass (I think?) on a lane visible for >10 seconds away is something that anyone who can't avoid needs to have their license revoked.
(Assuming I know which accident you're referring to) The car that killed the pedestrian in Florida wasn't using supervised full self driving, he was using autopilot (which was basically adaptive cruise control at the time).
I don't love Tesla (though I would like an electric car). I don't think it's unlikely that someone driving could have hit that or caused an even worse accident trying to avoid it.
However, I'm sure a human driver in the loop would have reacted. The driver sitting there watching a machine do its thing 99% of the time and being expected to step in to save that situation, though, is a horrific misreading of human nature.
Those humans saw the debris. What happens next, when a human is actively at the wheel, is that the driver should check all mirrors, decide whether to change lanes or brake, and execute; anything else could lead to a movie-style multiple-car pileup. Hitting the debris is the least dangerous course of action if there are cars all around. That looked like an empty road, but who knows.
By the way, playing a lot of racing video games is great training for dealing with that sort of stuff, except maybe for getting good with mirrors. I've been in a few dangerous situations, and each one was merely the ten-thousandth averted crash. No thinking, just reflexes.
I highly recommend people take a high performance/race driving course if they can. I did a single day one which involved high speed maneuverability trials designed to be useful in emergency scenarios (swerving, braking, hard turns) followed by a few laps around a racetrack.
It's one of the best ways to figure out what it feels like to drive at the limits of your car and how you and it react in a very safe and controlled environment.
Did your friend mention that the passenger saw it hundreds of feet away and even leaned in as they headed directly toward it? The driver also recognized it and grabbed the wheel as if to say "brace for impact!".
Obviously, in this particular case the humans wouldn't be hitting that. The people in the video have clearly seen the object, but they didn't want to react because that would have ruined their video.
Even if they did not understand what it was, in the real world when you see something on the road, you slow down or maneuver to avoid it, no matter whether it is a harmless piece of cloth or something dangerous like this. People are very good at telling when something is off; you can see it in the video.
> A friend of mine who loves Tesla watched this video and said "many humans would have hit that".
The very same video demonstrates this is not true, since the human in the video sees the debris from far away and talks about it, as the self-driving Tesla continues obliviously towards it. Had that same human been driving, it would've been a non-issue to switch to the adjacent lane (completely empty).
But as you said, the friend loves Tesla. The fanboys will always have an excuse, even if the same video contradicts that.
Question - isn't P(Hitting | Human Driving) still less than P(Hitting | Tesla FSD) in this particular case [given that if this particular situation comes up - Tesla will fail always whereas some / many humans would not]?
Clearly not, because they hit it and didn't even touch the wheel until they were basically on it. It can be hard to tell not just when something is there, but whether it's thick enough to warrant changing lanes
The question is whether avoiding the obstacle or braking was the safest thing to do. I did not watch the entire test, but there are definitely cases where a human will suddenly brake or change lanes and create a very unsafe situation for other drivers. Not saying that was the case here, but sometimes what a human would do is not a good rule for what the autonomous system should do.
An enormous part of safe driving is maintaining a mental map of the vehicles around you and what your options are if you need to make sudden changes. If you are not able to react to changing conditions without being unsafe, you are driving unsafely already.
Yes. Humans would. Which is why the car should be able to handle the impact. My Honda Civic has taken worse without issue. The suspension should be beefy enough to absorb the impact with, at worst, a blown tire. That the car has broken suspension says to me that Teslas are still too fragile, built more like performance cars than everyday drivers.
With millions of Teslas on the road one would think if that was true we would have heard something by now. My absolute worst car quality wise ever was a Honda Accord. And I owned shitty cars including a Fiat. My most reliable car was a Honda Civic before I “upgraded” to a brand new Accord. I abuse my Tesla and so far no issues driving in one of the worst roads in the country. I must hit 100 potholes per month and blew a tire already. It’s not a fun car to drive like a GTI (which I own as well) but it’s definitely a solid car.
Cars with "bad" suspension tend to survive potholes. A car with slow-to-move suspension will see the wheel dip less down into the hole when traveling at speed. But that is the exact opposite behabior you want when dealing with debris, which requires the supension to move up rather than down. "Good" systems will have different responce curves for up than down. Quazi-luxury cars fake this by having slow suspension in both directions, to give the sense of "floating over potholes".
> That a human was still in the loop in addition to a computer and both missed it.
Listen to the audio in the video. The humans do see it and talk about it for a long time before the car hits it. Had a human been driving, plenty of time to avoid it without any rush.
They do nothing to avoid it presumably because the whole point of the experiment was to let the car drive, so they let it drive to see what happens. Turns out Tesla can't see large static objects in clear daylight, so it drives straight into it.
That's laughable. Any human who couldn't avoid a large, clearly-visible object in the middle of an empty, well-lit road should not be operating a vehicle.
That's not to say that there aren't many drivers who shouldn't be driving, so both can be true at once, but this is certainly not a bar against which to gauge autonomous driving.
I would counterpoint that my cheap little Civic has hit things like that and hasn't broken a thing. HEH.
The best time was a very badly designed speed bump that I didn't even hit at high speed, but one side was ridiculously inclined versus the other, so the entire Civic's front end just smashed right into the pavement and dragged for 3 feet. I wouldn't be surprised if I went into a body shop and found the front end tilted upwards by a few centimeters. lol
Timestamp 8:00-8:30. Your Civic is not hitting that and surviving any better than the Tesla. It just got luckier. It may be easier to get lucky in certain vehicles, but still luck based.
Yes many human drivers would hit it. The bad ones. But we should want driverless cars to be better than bad drivers. Personally, I expect driverless cars to be better than good drivers. And no, good drivers would not hit that thing.
Anecdotal: I am surprised how the basic Tesla autopilot often cannot even read the speed limit signs correctly. In perfect lighting conditions. It just misses a lot of them. And it does not understand the traffic rules enough to know when the speed limit ends.
I know that the basic autopilot is a completely different system than the so-called FSD.
But equipped with that experience from the basic autopilot, it does not surprise me that a large debris on the road was completely missed by the FSD.
In FSD, there's an annoying bug where Georgia minimum speed signs are misinterpreted as speed limit signs. It's because most states that do minimum speed signage combine it with the speed limit into a single sign, but Georgia has separate, freestanding minimum speed signs. Thankfully, recent versions of FSD don't immediately start slowing down anymore when a misinterpreted sign makes the car think that the speed limit on a highway is 40; but the sign comprehension bug has remained unresolved for years.
I use autopilot for local driving (city - suburbs) and I pay for FSD when on long road trips (>300 miles). You are correct, they are completely different things so one doesn’t correlate to the other one.
That they are different things is really disappointing. If you want people to trust the system enough to buy FSD, the autopilot mode should use the same system, with limited functions. There is no reason why the vision/detection systems should be different. Especially if you already have the proper hardware installed…
Tangentially - if you as a European happen to drive on US highways, you will notice that they are heavily littered with fallen cargo, aluminum ladders, huge amounts of shredded tires, and occasionally a trailer without a towing car... It has been so bizarre for me to observe this. Does nobody clean that up?
I just got back from a trip to the USA where I spent about five days driving around Michigan, Illinois, and Wisconsin and the number of shredded truck tires on the highways was flabbergasting.
From what I understand, in the USA when a truck tire wears down they put a new layer of tread on it. But it doesn't seem to work very well as it eventually peels off.
Those are called retreads and they are not uncommon worldwide. If you're seeing anything other than long thin strips of tread on the road it's not a retread related failure.
Every now and then the Karens get to screeching about it and it reaches a critical mass and the NHTSA does a study and they find that most of the debris you're seeing has nothing to do with retreads. Here's the summary of a recent one:
Retreading is legal, but only certified companies with their own E mark can legally do it. Combine that with the more in-depth inspections every 6 or 12 months for trucks over 3.5t, and it usually means that the tires are in better condition.
I hit a retread as it became detached from the lorry I was following on the M25, UK. Scary moment, similar to the video in TFA, + an expensive repair job.
I'm from Norway, and visited a buddy in Florida back in the early 2000s. It was my first time to the US.
I recall I was completely flabbergasted by all the cars just ditched along the highway. There were lots of them, just off the road into the grass on the side or whatever.
I asked my buddy about it and he said it was usually tires, as it was cheaper to buy another car than get new tires... Well that didn't help my blown mind one bit.
Mind you, on the way to his house I passed a Kia dealer which literally had a huge "buy one car - get one free" sign outside...
When I was a boy in Florida in the 1970s, there was an annual inspection for automobiles. Some other states still do this. It would certainly improve overall safety on the roads if we still had any minimum requirements.
In the linked video highway patrol comes out to remove the debris just a few minutes after they hit it. (highway patrol had been called out prior them hitting it)
It's been years since I've seen anything you couldn't drive over that wasn't in the travel lane.
Around here anything big enough to matter gets reported, the cops fling it to the side of the road and it gets picked up on a schedule because they always pick things up a few weeks before they mow (presumably because hitting garbage isn't great for the mowers).
It depends where in the US you drive. It’s a big country with independent state governments. It’s like saying I was driving in Romania and I was shocked by how bad European highways are. I lived in Texas and the stuff I saw on the highway was crazy, vacuum cleaners, decorated Christmas trees and refrigerators. Most parts of the country interstate and highway systems are pretty clean.
When it comes to road conditions it's often comical how different roads can be just across state or county lines. From almost completely destroyed roads barely holding together to a perfectly clean and flat asphalt road. Where I have noticed more trash off highways is rural towns or dense urban areas that run right alongside highways. I definitely noticed more trash in the south (Texas/Oklahoma/Louisiana) than north (Iowa, Minnesota, Michigan).
Oh but we do. Most of the state-owned motorways have been sold off to conglomerates about two decades ago, dividends and exec comps have to come from somewhere.
In Germany, trucks pay tolls for highways. Also, all gas is taxed, and that goes into the federal budget which is then used to finance highways, so I would say everybody filling up is financing highways.
You don't need to drive far, just get on a US highway and there is dangerous litter every few hundred meters. In extreme cases it goes down to a few dozen meters. Sometimes it was like driving in some Mad Max movie.
There usually nominally is, and they may do some of the work, but overfunding law enforcement and then having it cover work that should have been done by other agencies whose funding has been squeezed by the drive to stuff more money into LE is a pretty common pattern for US state and local governments.
But also, the dominance of car culture in the US is such that the principal state police force may actually be designed principally as a component of the highway traffic authority, as is the case in California, where the Highway Patrol was always larger than the State Police and absorbed it in 1995.
There is in most states. There are often signs periodically telling you what phone number to call to report such hazards. If possible, it's better to call that number than 911, but if you don't know the number, 911 will do. They'll just forward your report to that agency anyway.
A ladder in the road is a legitimate emergency. If you call 911 to report a ladder in the road, they will direct the issue to the relevant agency (which will be the state police, in all likelihood, because they will need to close a lane).
I'm sorry. Could you repeat that? I couldn't make out what you said clearly?
All I heard was "taxes".
/s
On a more serious note, in the US we generally go in the direction of fewer services rather than more services. It, of course, leads to massive inefficiencies, like police removing shredded tires. But it's very difficult here, politically speaking, to get Americans to agree to add government services.
Fuel tax is a big factor, but the US has a lot of road: about 3x the paved surface of Germany. European winters are also milder than in the US. I'm not sure how many European roads would survive going from -10 to 140 like they do in the Midwest.
Actually, that's surprisingly little road for the US. The population is significantly larger (more than 3x), so on a per-person basis roads in Germany see much more use than in the US. And roads there are in a much better state than in the US despite the higher usage rate.
So, as a consequence of this: if the US spent the same per-person dollar amount on upkeep, US roads would have WAY more money for maintenance. Yet the outcome is obviously worse.
Also more VMT, which would tend to balance out the excess roads, because more driving leads to more fuel tax revenue. The USA has more than double the VMT per capita of Germany. If the fuel tax were appropriately set, this factor would compensate for the greater size of the road network.
Not just that but also tolls. There are way more toll roads, at least where I've lived in Europe, compared to where I've lived in the US (Spain being the one very noticeable exception between France and Portugal).
> That's kind of irrelevant, this technology is meant to be safer and held to a higher standard
I don't think that is the case. We will judge FSD on whether it causes more or fewer accidents than humans, not necessarily in the same situations. The computer is allowed to make mistakes a human wouldn't if, in return, it makes a lot fewer mistakes in situations where humans would.
Given that >90% of accidents are easily avoidable (speeding, not keeping enough following distance, drunk/tired driving, distraction due to smartphone usage), I think we will see FSD become safer on average very quickly.
That's the main advantage self-driving has over humans now.
A self-driving car of today still underperforms the top of the line human driver - but it sure outperforms the "0.1% worst case": the dumbest most inebriated sleep deprived and distracted reckless driver that's responsible for the vast majority of severe road accidents.
Statistics show it plain and clear: self-driving cars already get into fewer accidents than humans, and the accidents they do get into are much less severe too. Their performance is consistently mediocre. Being unable to drink and drive is a big part of where their safety edge comes from.
The statistics on this are much less clear than Tesla would like us to believe. There's a lot of confounding factors, among them the fact that the autonomous driver can decide to hand over things to a human the moment things get hairy. The subsequent crash then gets credited to human error.
> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed.
NHSTA's reporting rules are even more conservative:
> Level 2 ADAS: Entities named in the General Order must report a crash if Level 2 ADAS was in use at any time within 30 seconds of the crash and the crash involved a vulnerable road user being struck or resulted in a fatality, an air bag deployment, or any individual being transported to a hospital for medical treatment.
At highway speeds, "30 seconds" is just shy of an eternity.
Tesla doesn't report crashes that aren't automatically uploaded by the computer. NHTSA has complained about this before. Quoting one of the investigations:
> Gaps in Tesla’s telematic data create uncertainty regarding the actual rate at which vehicles operating with Autopilot engaged are involved in crashes. Tesla is not aware of every crash involving Autopilot even for severe crashes because of gaps in telematic reporting. Tesla receives telematic data from its vehicles, when appropriate cellular connectivity exists and the antenna is not damaged during a crash, that support both crash notification and aggregation of fleet vehicle mileage. Tesla largely receives data for crashes only with pyrotechnic [airbag] deployment, which are a minority of police reported crashes. A review of NHTSA’s 2021 FARS and Crash Report Sampling System (CRSS) finds that only 18 percent of police-reported crashes include airbag deployments.
Even if this were the case, it would still skew the statistics in favor of Tesla: Autopilot gets to hand off the complicated and dangerous driving conditions to the human, who then has to deal with them. The human, by contrast, cannot do the same; they have to deal with all hard situations as they come, with no fallback to hand off to.
> The computer is allowed to make mistakes that a human wouldn't, if in reverse the computer makes a lot less mistakes in situations where humans would.
This subverts all of the accumulated experience other road users have about what a car will do. Everyone is used to the potential issues caused by humans; on top of that, other road users will now have to learn the quirks of FSD and keep an eye out for abnormalities in behaviour.
That's just unrealistic: not only will people have to deal with what other drivers can throw at them (e.g. veering out of lane due to inattention), but they'll also have to be careful around Teslas, which can phantom-brake out of nowhere, fail to avoid debris (shooting it along unpredictable paths), etc.
I don't think we should accept new failure modes on the road for FSD, requiring everyone else to learn them to be on alert, it's just a lot more cognitive load...
> That's kind of irrelevant, this technology is meant to be safer and held to a higher standard.
True, but not even relevant to this specific example. Since the humans clearly saw it and would not have hit it, so we have a very clear example where Tesla is far inferior to humans.
Indeed... You can see the driver reaching for the wheel; presumably he saw it coming and would have hit the brakes. He left the car to do its thing, thinking it knew better than him... maybe.
Personally if the road was empty as here, I'd have steered around it.
This was really best possible driving conditions - bright day, straight dry road, no other cars around, and still it either failed to see it, or chose to run over it rather than steering around it or stopping. Of all the random things that could happen on the road, encountering a bit of debris under ideal driving conditions seems like it should be the sort of thing it would handle better.
And yet Tesla is rolling out robo taxis with issues like this still present.
[Insert red herring about MCAS and cop out about how redundancy is "hard" and "bad" "complexity".]
Have a minimum quorum of sensors, disable any sensor that generates impossible values (while deciding carefully what is and isn't possible), use sensors that are much more durable, reliable, and self-testable, and then integration-test and subsystem-test thoroughly, and test some more.
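Something as simple as the following sketch captures the quorum-plus-plausibility idea; the ranges, sensor count, and median vote are illustrative assumptions, not a claim about how any shipping system works:

    # Throw out physically impossible readings, require a minimum number of
    # healthy sensors, and vote with the median so one bad sensor can't win.
    from statistics import median

    PLAUSIBLE_RANGE_M = (0.0, 300.0)   # a forward range reading outside this is nonsense
    MIN_QUORUM = 2

    def fuse_range_readings(readings_m):
        lo, hi = PLAUSIBLE_RANGE_M
        plausible = [r for r in readings_m if lo <= r <= hi]
        if len(plausible) < MIN_QUORUM:
            return None                 # degrade gracefully: alert the driver / hand back control
        return median(plausible)        # robust to a single wildly wrong sensor

    print(fuse_range_readings([42.1, 41.8, -7.0]))   # bad sensor rejected, fuses to ~41.95
    print(fuse_range_readings([42.1, -7.0]))         # quorum lost -> None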
I've heard stories of plastic bags on the highway making their way into the path of the front-facing cameras of vehicles. Resulting in automatic emergency braking at highway speeds.
Human eyes are better by most metrics than any camera, and certainly any camera which costs less than a car. Also, obviously, our visual processing is, by most metrics, so much better than the best CV (never mind the sort of CV that can run realtime in a car) that it's not even funny.
they're making fun of Tesla, which stopped putting radar (ed: I misremembered, thanks to the commenter below) in their cars during the pandemic when it got expensive and instead of saying "we can't afford it", claimed it's actually better to not have lidar and just rely on cameras
Yeah! Just add more sensors! We're only 992 more sensors away from full self-driving! It totally works that way!
The debris? The very visible piece of debris? The piece of debris that a third party camera inside the car did in fact see? Adding 2 radars and 5 LIDARs would totally solve that!
For fuck's sake, I am tired of this worn-out argument. The bottleneck of self-driving isn't sensors. It was never sensors. The bottleneck of self-driving always was, and still is, AI.
Every time a self-driving car crashes due to a self-driving fault, you pull the blackbox, and what do you see? The sensors received all the data they needed to make the right call. The system had enough time to make the right call. The system did not make the right call. The issue is always AI.
It’s a lot easier to make an AI that highly reliably identifies dangerous road debris if it can see the appearance and the 3D shape of it. There’s a fair bit of debris out there that just looks really weird because it’s the mangled and broken version of something else. There are a lot of ways to mangle and break things, so the training data is sparser than you’d ideally like.
You want the AI to take the camera's uncertainty about a road-colored object and do an emergency maneuver? You don't want to instead add a camera that sees metal and concrete like night and day?
We don't have a bottleneck anymore. We have Waymo. They seemed to have solved whatever the issue was...I wonder what the main difference between the Waymo system and the Tesla system is?
The "main difference" is that Waymo wouldn't even try to drive coast to coast.
Because it's geofenced to shit. Restricted entirely to a few select, fully pre-mapped areas. They only recently started trying to add more freeways to the fence.
You're right, they wouldn't try, but I don't think there's any evidence for the idea that Waymo couldn't pull this trip off now from a technical POV. Even if they're pre-mapping, the vehicles still have to react to what's actually around them.
Respect to these guys for committing to the bit and letting the Tesla hit it. This is real journalism. Stark contrast to so much of the staged engagement-bait on YouTube.
I noticed that, too. They seemed to tense up and notice it with enough time to spare to manually intervene and either slam on the brakes or swerve to avoid it.
That was my hunch, but Google Lens was able to ID it. Possible that Waymo vehicles can do this too, but that must take some serious compute and optimization to do at highway speeds.
Thanks for the direct link. That accident could've been so much worse if the ramp had caught an edge under the car. It could easily have flipped it. They were really taking a risk letting the car run over it.
Two more years guys, just give him two more years, a few more billion, a bit more political power and I promise he'll give you your fancy self driving toy. (Repeat from 2012 ad infinitum)
> Two more years guys, just give him two more years, a few more billion, a bit more political power and I promise he'll give you your fancy self driving toy. (Repeat from 2012 ad infinitum)
The trillion dollar pay package will make it happen, that's what was missing.
the problem is that TSLA (the company) is pivoting hard to humanoid robots, which is a relatively easier problem, perfectly big market and Musk is a terrific salesguy. Medium/long term, humanoid robots are commodity but so were electric cars and TSLA (the stock) rode that to glory.
Eh? _Useful_ humanoid robots are if anything considerably harder. Tech demos are easier, granted; humanoid robot tech demos date back to the 80s or so.
If it's not your money does it matter? I don't think it's fair to say he doesn't deliver anything e.g. he did cars, rockets, and now AI (the rate X are building out training capacity is genuinely astonishing) at the same time.
I think the point is that if it's his money he's pissing away, then any other projects the money would have been spent on would have been equally dubious in any case. He's not going to, all of a sudden, become wise simply because he doesn't spend money on what he's spending money on.
Did we give him wayyy too much free money via subsidies? Yes. But that was our mistake. And if we hadn't given it to him, we would have given it to some other scam artists somewhere or other. So even in the case of the counterfactual, we could expect a similar outcome. Just different scumbags.
idk man, I'm not in the matrix deep enough to give a shit about "building out training capacity" of LLMs, I think there are way more important topics like not destroying the fabric of our society and political systems, but idk, I guess I'm just a far left antifa maniac terrorist or something
Easy problem to solve. We just need to train an AI image classifier with highly annotated images of every single possible combination of road debris in every single country in the entire world. Shouldn't take longer than a month, right team? /s
It seems well documented that the Tesla system is at level 2.
and it requires "hands on supervision".
Has Elon lied about the capabilities?
Yes, on many occasions.
Crashing your car to prove it seems like a waste.
When the documentation is clear.
"""
Tesla Autopilot is an advanced driver-assistance system (ADAS) developed by Tesla, Inc. that provides partial vehicle automation, corresponding to Level 2 automation as defined by SAE International
"""
https://en.wikipedia.org/wiki/Tesla_Autopilot
"""
Drive Autonomously: No manufacturer has achieved Level 4 or Level 5 autonomy for road use. Tesla has not achieved Level 3 conditional autonomy like Mercedes-Benz’s DrivePilot system. A Tesla cannot drive itself. The driver must remain attentive at all times. Tesla now qualifies Autopilot and Full Self-Driving with a (Supervised) parenthetical on its website.
Tesla’s Level 2 system is a hands-on system that requires the driver to regain control immediately. Torque sensors confirm the driver’s hands are on the wheel or yoke.
Level 2 Driving on City Streets: Tesla does list Autosteer on City Streets as a feature of Full-Self Driving. But notably, its website provides no further
"""
https://insideevs.com/news/742295/tesla-autopilot-abilities-...
"""
In a statement addressing the US recall, Tesla declared its technology is a ‘Level Two’ semi-autonomous driving system – not the more advanced ‘Level Three’ system which is already being developed and rolled out by rival car-makers.
"""
https://www.drive.com.au/news/tesla-full-self-driving-level-...
No manufacturer has achieved Level 4 or Level 5 autonomy for road use.
Waymo has achieved Level 4, with hundreds of thousands of paid rides per week and a stellar safety record. But they're not technically a manufacturer I guess.
I do not understand the official position of Tesla on FSD (Supervised).
The owner's manual Tesla has posted on www.tesla.com says, "Full Self-Driving (Supervised) is a hands-on feature that requires you to pay attention to the road at all times. Keep your hands on the steering wheel at all times..." [0]
The owners of Tesla vehicles are saying that they can just relax and not keep hands on the wheel.
Is Tesla maintaining a more conservative stance while encouraging drivers to do dumb things? Or is the owner's manual just out of date?
I got my first Tesla last year and my first trip with FSD was just after the launch of V13, so I could not compare it to earlier versions. But I was shocked by how good it was. I completed an 800-mile trip with a handful of interventions, most or all of them likely unnecessary. Even with rain, the system worked well. I don't understand how the system was not able to see this obstacle and slow down or change lanes. Did they have traffic tailgating? I can see some edge cases where there's really no way to avoid something like this safely. In any case it's pretty unfortunate, and it will make me even more cautious when using FSD.
It's because anecdotal experience means very little. In ~2021, these vehicles could almost never make it from A to B without a safety-critical intervention.
That they can today doesn't mean they can do the same route 10,000 times without a safety-critical intervention. In fact, public tracking indicates Tesla's numbers are MUCH lower than that.
No. If you continue to watch the video after the collision, it's clear that there was no traffic tailgating. They even slowed down and pulled over to the side of the road. No other cars.
I'm unsure why having traffic tailgating even factors into it. If you have to hit the brakes to avoid a collision in front of you, it's not your responsibility to deal with traffic behind you that wasn't at a safe distance; that's all on them.
The point is "If you *have* to hit the brakes to avoid a collision in front of you". Obviously if you have a tailgating idiot behind and you can avoid the collision in front without forcing them to rear-end you, then you do that; but that's not always possible, and in that case the blame is then entirely on the person who hit you from the rear, and *not* on you for rear-ending someone.
Also, even if you don't want to get rear-ended, you can at least brake moderately and hit the debris at a slower speed. Humans make these sorts of split-second tradeoffs all the time on the highway.
> Obviously if you have a tailgating idiot behind and you can avoid the collision in front without forcing them to rear-end you
That kind of consideration is where having traffic tailgating factors into it. A risk of collision (or a minor collision) may be a better option than a certain (or worse) collision with the tailgating idiot.
When they first released V13, the highway stack was still the old C++ code. It wasn't until four or five months later that they switched it to a neural network. It doesn't seem like they've put much focus on it since then.
Tesla FSD still uses C++. What you're referring to is control policy code that was deleted. FSD also doesn't have a separate "highway stack" and hasn't since v12.
Even with traffic tailgating it would need to brake or move to the right. Your comment sounds like a Tesla ad written by someone from Tesla or a heavy, heavy fanboy.
If you watch the video, the human non-drivers in the car noticed the road debris and could have reacted, but seemed not to because they were waiting for the computer to respond first.
Heh, just yesterday I drove past some roadkill and wondered what a self-driving car would make of that. Then I started wondering how it would handle deer crossing the road 50 yards ahead, or running alongside the vehicle as they sometimes do when they get spooked. Or how it would handle the way people drive down the center of gravel roads until they meet someone, and then move over.
At 56, I don't expect to see it on country roads in my lifetime, but maybe someday they'll get there.
> It’s an accident that could also happen to an inattentive human driver. (Ask me how I know :( )
Anything "could" happen, but it would take an inordinately inattentive driver to be this bad.
They had 7-8 seconds of staring and talking about the debris before hitting it (or perhaps more, the video starts the moment the guy says "we got eh, a something", but possibly he saw it some instants before that).
So a person would need to be pretty much passed out to not see something with so much time to react.
At 14:40 in the video they show the footage to a cop whose immediate reaction is:
"To be honest with you. {talked over} you decide to change lane. How come you, did you think to change lanes or you just kinda froze up in that moment?"
He clearly thought they had plenty of time to react to it.
I thought the title meant the software crashed, but it was the car that crashed: "They didn’t make it out of California without crashing into easily avoidable road debris that badly damaged the Tesla Model Y ... In the video, you can see that the driver doesn’t have his hands on the steering wheel. The passenger spots the debris way ahead of time. There was plenty of time to react, but the driver didn’t get his hands on the steering wheel until the last second."
And if the car was fully self-driving and capable of seeing debris that was clearly seen by two humans (who Tesla claims have inferior eyes), it could have taken the opportunity to change lanes and avoid the obstruction.
They want to pretend you'll only need to actually intervene in 'edge case' situations, but here we have an example of perfect conditions requiring intervention. Regardless of the buzzwords they can attach to whatever methods they are using, it doesn't feel like it works.
The issue with ‘driver in the loop only for emergencies’ is that it’s harder for someone to be prepared to react only occasionally (but still promptly and correctly) when outlier behavior happens, not easier.
For automated tools like CNC machines, there is a reason it’s just ‘emergency stop’ for instance, not ‘tap the correct thing we should do instead’.
But doing an e-stop on a road at speed is a very dangerous action as well, which is why that isn’t an option here either.
And you don't even know until it is too late even if you are paying attention. Suppose you see the obstruction well in advance. You expect the car to change lanes. You get past the point where you would have changed lanes but there's still time. You aren't yet aware that the car is missing the obstruction. A moment later you are much closer and make the judgement that the car is making a mistake. You grab the wheel but now it's too late.
The alternative is taking control of the car all the time. Something nobody is going to practically do.
Yeah, in many ways the only way a human can know for sure that it’s not seeing something is past the point of no return/the absence of action, which is always hard.
And that's why many "real" self-driving systems that share the road with other users are quite speed-limited, to allow them to come to a full stop relatively safely if needed and to give a human time to take over properly. (E.g. the Mercedes system sticks way below the usual speed limits, which means its main use currently is handling heavy stop-and-go traffic, or many of the autonomous "buses" for dense city spaces/campuses/...)
Passively monitoring a situation continuously and reacting quickly when needed is something humans are not good at. Even with airline pilots, where we do invest a lot in training and monitoring, it's a well-understood issue that having to take over in a surprising situation often leads to confusion and time needed to re-orient before they can take effective action. Which for planes is often fine, because you have the time buffer needed for that, but not always.
There's sort of an analogous conundrum with Airbus-style automation: the system has various levels of automation and protection (e.g. preventing you from stalling the plane)
And then when something goes wrong, you are unceremoniously handed a plane with potentially some or all of those protections no longer active.
As an analogy, imagine your FSD car was trying to slow down for something, but along the way there is some issue with a sensor. So it gives up and hands control back to you while it's in the middle of braking, yet now your ABS is no longer active.
So now the situation is much more sudden than it would have been (if you had been driving the car you would have been aware of it and slowing down for it youself earlier in the game), you likely weren't paying as much attention in the first place because of the automation, and some of the normal protection isn't working.
I’ll defend airbus a little - there are flight laws that more or less provide at any given moment as much automation as is possible given the state of the sensors and computers. So it doesn’t just go ‘oops, a sensor failed, now you have direct control of the plane.’
It does have the same problem: if 99.999% of your flight time is spent in normal law, you are not especially ready to operate in one of the alternate laws or, god forbid, direct law, which is similar to the case of a driver who, accustomed to the system, forgets how to drive.
But I think we have a ways to go before we get there. If the car could detect issues earlier and more gradually notify the driver that they need to take control, most every driver at present retains the knowledge of how to directly operate a car with non-navigational automation (ABS as you mentioned, power steering, etc.)
But I think there was another example with an engine asymmetry (an autothrottle issue?) that the autopilot was fighting with bank; eventually it exceeded the bank limit and dumped a basically uncontrollable aircraft in the pilots' lap. It would have been more obvious if the pilots had been watching the yoke bank more and more. (Though it looks like this was China Airlines 006, a 747SP, which contradicts that thought.)
I agree that we can make the situation less abrupt for cars in some cases (though people will probably get annoyed by the car bugging them for everything going on)
It’s important to point out that airline pilots are trained to handle sudden emergencies. This has been incredibly successful at scale, but it came at great expense, in both money and lost lives. And it still isn’t perfect.
The level of training required to oversee full automation is non-trivial if you have to do more than press a stop button.
I've been using FSD lately on trips from CT to NYC and back (and the car performed well). My biggest fear has been debris (like shown in the video), or deer.
I sort of thought this was going to be a head-on collision; it was a piece of debris on the highway that I think a lot of people might have hit. Still, we expect autonomous vehicles to do better than people, but it didn't hit a car or injure anyone.
We've lost our sense-making ability in our coverage of AV progress. Setting aside Elon's polarizing behavior (which is a big thing to set aside), all of these things seem true:
- Elon bullshits wildly and sometimes delivers
- FSD is far ahead of other available personally owned autonomy - no other system even attempts to drive on surface level streets. FSD works better than you would think - I've been in several flawless rides across town in conditions I didn't expect it to handle.
- FSD doesn't work well enough to rely on it yet, so what's the point for real consumers who don't want to sit there nervously hovering their hands over the wheel, even if it is impressive and you might ride for several hours before anything goes awry.
- We can't really know how close FSD is to being reliable enough because all we have is marketing claims from Tesla, fan boy clips on YouTube, and haters who can't seem to discern where FSD really is ahead, even as it falls far short of hype
What I wish we had was a common metric audited by a third party reliably published in terms of hours until disengagement or something like that across all systems in various conditions.
I rant about Elon a lot, but can someone just explain to me how this keeps going on? FSD is almost completely a solved problem by the likes of Waymo etc. Why does anyone care what Tesla is failing to do with FSD? Is this all about "how can we invent FSD without lidar"? Why are we bothering, because Cybertruck owners don't want a dorky box on top of their truck? Does their truck not already look ridiculous?
FSD isn't just about the lack of lidar, it's the only system that can be employed everywhere, without any prior special mapping, by private owners. So far, there are no other manufacturers of vehicles available in the US who are meaningfully competing in this space.
It's the only system the manufacturer is willing to let be employed everywhere. I bet Waymo would work better everywhere, but they are safety-conscious and care about liability.
> I bet Waymo would work better everywhere but they are safety conscious and care about liability.
Except you'd need to map "everywhere" in high-fidelity 3D, save it somewhere in the car, and have it accessible near-realtime. The real reason Waymo can't service "everywhere" is that their approach doesn't scale.
And don't get me wrong - it's clearly a better service where it works (at this point in time), but it'll realistically only ever work in pre-mapped cities. Which of course would remove a ton of drivers and accidents, so still a win.
It's almost entirely a product/economic gamble. Basically:
"How do we get self-driving into millions of regular consumer cars without doubling the price by adding expensive sensors and redesigning the vehicle around huge chunky sensors"
Waymo is focused on taxis for a reason: it's likely going to be unaffordable except to people driving Lambos. But that may also be fine for a big chunk of the public who don't want to own cars (or want to rent their own as a service).
Some consumer car companies are experimenting with adding smaller LIDAR sensors, like the recent Volvo EX90, which costs ~$100k. But they aren't as sophisticated as Waymo's.
Is LIDAR full of unobtainium? Is there some fundamental first-principles reason that the cost of LIDAR can't go down by orders of magnitude? Isn't that kind of reasoning from first principles what Elon's genius was supposed to be about, like with cost of accessing space? But apparently he thinks a spinning laser is always going to cost six figures?
That just seems like bad luck for the experiment. I've never seen anything like that driving on the highway. Was the claim that it could go coast to coast no matter what was thrown at it?
It would depend on how dense the point cloud is; however, at 100 ft I'm guessing the resolution for something like Waymo would be on the order of 10-20 cm, which would suggest it wouldn't be able to detect it (at a minimum it wouldn't know its height, and thus it wouldn't know if it's a piece of paper or a steel box). My guess is it also wouldn't know until it's right on top of it.
Where did you get 10-20 cm? That sounds wildly off. Commercial lidar would get you to ~3-5 cm at 100 feet easily and Waymo's is said to be better than the commercially available units.
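For a rough sense of where those numbers come from: point spacing scales roughly as range times angular resolution. A minimal Python sketch, using 0.1° and 0.01° as illustrative angular resolutions (assumed values for typical spinning vs. high-end automotive lidar, not any vendor's published spec):

    # Point spacing ≈ range * angular_resolution (in radians).
    # The 0.1 and 0.01 degree figures are assumed typical values, not a real spec.
    import math

    range_m = 30.5  # ~100 ft
    for res_deg in (0.1, 0.01):
        spacing_cm = range_m * math.radians(res_deg) * 100
        print(f"{res_deg} deg -> ~{spacing_cm:.1f} cm between points at 100 ft")
    # 0.1 deg -> ~5.3 cm; 0.01 deg -> ~0.5 cm

At roughly 5 cm spacing, a foot-long, several-inch-tall chunk of metal would return multiple hits well before the car reached it.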
Yes. Everything that is/has a volume gets detected.
The problem with the camera thing is that the debris doesn't move, is quite small and is dark.
I suspect it overran it because of maybe a fix/patch on this behavior: https://www.reddit.com/r/TeslaFSD/comments/1kwrc7p/23_my_wit...
For those following along and curious about the tone and intensity of the criticism against Musk and Tesla (both here and in general) this blog post is a solid framework for understanding what drives many of the responses https://medium.com/incerto/the-most-intolerant-wins-the-dict...
It’s a huge chunk of metal, well over a foot long, in the middle of a lane. The center of a lane is usually higher than the sides, and an uneven patch of road can cause a slight bounce before it, further reducing clearance. I’d worry about my RAV4 (8 inches of clearance) clearing it safely.
At first, I thought it was possibly a tire tread, which tend to curl up.
And here, the too-frequently posted excuse that "oh, many humans would have hit that too" is utter nonsense.
In that situation, with light traffic, clear daylight visibility, and wide shoulders, any human who would have hit that is either highly distracted, incompetent, or drunk.
Both driver and passenger saw the object at least 7-8 seconds ahead of time; at 0:00sec the passenger is pointing and they are commenting on the object, at 0:05sec passenger is leaning forward with concern and the Tesla drives over it at 0:08sec. The "Full Self Driving" Tesla didn't even sound any warning until a second AFTER it hit the object.
Any alert half-competent driver would have certainly lifted off the accelerator, started braking and changing lanes in half that time. They didn't because of the expectation the Tesla would take some corrective action — bad assumption.
"My 'Full Self Driving' is as good as a drunk" is not a worthwhile claim.
Worse yet, the entire concept of [it drives and then hands control to the human when it can't handle a situation] is actively dangerous to levels of insanity.
Human perceptual and nervous systems are terrible at tasks requiring vigilance; it is as if our brains evolved for attention to wander. Having a life-critical task that can literally kill you or others ALMOST fully handled autonomously is a situation designed for the human to lose attention and situational awareness. Then, demanding in a split second that (s)he immediately become fully oriented, think of a reaction plan, and then execute it, is a recipe for disaster.
In this case, it is even worse. The Tesla itself gave the humans zero warning.
The driver and passenger saw the object well in advance of the Tesla, with 3-4 times the time and distance needed to react effectively. But they assumed nothing was wrong because they assumed the Tesla would handle the situation, and they were not in a driving mindset, instead waiting to see what the Tesla would do. They were not actively driving the car in the world. Fortunately, the only result was a mangled Tesla, this time.
> Tesla’s EV business is in decline and the stock price depends entirely on the self-driving and robot promises
I don't think that's right; I think the stock price entirely depends on people seeing it as a vehicle to invest in Musk. If Musk died tomorrow, but nothing else changed at Tesla, the stock price would crater.
If it were only that TSLA is a way for retail investors to buy shares of ELON, Tesla wouldn't need to do a Robo taxi rollout in Austin to bolster claims that FSD is for real.
Elon can't levitate TSLA and other valuations by himself. There has to be at least the appearance of substance. That appearance is wearing thin. While I'm going to observe the caution that the market can stay irrational longer than I can stay solvent, once reality asserts itself, Elon will be powerless to recreate the illusion.
The best selling model counts for something, but even with that Tesla just isn't a big car company when compared to the other major car companies.
They have the best selling vehicle by a little under 1%, with the Tesla Model Y just edging out the Toyota Corolla. But Toyota also has the 3rd best selling model (RAV4) that is about 7% behind the Model Y. And they have a third model in the top 10, the Camry, at a little over half the Model Y sales.
Just those 3 Toyota models combined sell about 30% more than all Tesla models combined.
Across all models Toyota sells 6 times as many cars as Tesla.
By number of cars sold per year Tesla is the 15th biggest car maker. The list is Toyota, Volkswagen, Hyundai-Kia, GM, Stellantis, Ford, BYD, Honda, Nissan, Suzuki, BMW, Mercedes-Benz, Renault, Geely, and then Tesla.
If we go by revenue from sales rather than units sold, it is 12th. The list is: Toyota, Volkswagen, Hyundai-Kia, Stellantis, GM, Ford, Mercedes-Benz, BMW, Honda, BYD, SAIC Motor, and then Tesla.
Yet Tesla has something like 6 times the market cap of Toyota and around 30 times the market caps of VW and Honda. That's pretty much all hype.
So a thing has to be among the “world’s biggest” to be of note?
Of course not.
They don’t make as many vehicles or have the revenue of other auto manufacturers, but who cares.
What they do, they do very, very well. They led the way to mass-market EV adoption. Even if they crumble tomorrow, their contribution is immense.
Who cares about market cap, it’s all just gambling.
I actually think the stock price would go up. His detour to fascism and megalomania has chased off tons of liberal environmentalists like me, who are the target audience for electric cars. I cancelled what would have been our replacement Tesla when he was implying on Twitter that there can be good reasons for sneaking into the Speaker of the House’s house and hitting her husband in the head with a hammer.
I think firing Musk would do wonders for Tesla as going concern but would be a disaster for the stock price.
I suspect a significant proportion of Tesla's stock price comes from people who are using it as a proxy for his other companies that the public can't invest in, primarily xAI (as all AI companies are in a horrific bubble right now) and SpaceX.
It may do this or that on the announcement, but if growth stops (arguably already has) and there is no hype for years, it's likely going to grind down to a normal valuation with time.
The passive investing / market cap weighted ETF complex tends to help big valuations stay big, but a company like Tesla still needs that sharp shot in the arm followed by frenzied buying occasionally in order to stay aloft (be it by traders, retail participants, shorts covering, etc).
I suppose they could replace Musk with another hype salesman, but the "hate" that Tesla gets is a big part of these upside shock cycles for the stock, because the ticker is a siren call for short sellers, who are ultimately guaranteed future buyers.
The issue is that if you look at Tesla as a normal car company (without an iconoclast CEO/personality), then you need to do normal P/E math, which is going to be sub-100 for sure.
Right now it’s easily double to triple that, even with Musk’s behavior.
Normal P/E math for a car company would put it more in the 10-20 region; we’re not talking about it being double to triple a sensible valuation, but seriously something like 15 times…
The reality distortion field is at ATH. Tesla stock is nearing the high it had when it was growing quickly and very profitable. Now sales and margins are shrinking, and the stock is soaring again.
I honestly don’t know if I would have seen and avoided that, it came up really fast. And based on the video it looked like a cardboard box or something not worth avoiding until it was within 2-3 seconds range.
It's easy to give you credit here—you would have seen it and avoided it. They saw it and had plenty of time to steer to the left, in the open lane, to avoid it.
It was at least six seconds between the co-driver pointing it out and the car hitting the object. The screen shows 77 mph, which means he saw it from approx. 200 m away. Anybody would at least be able to prepare or slow down in such a situation.
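That 200 m figure checks out with a quick back-of-envelope in Python, assuming the roughly six seconds of warning described above:

    # 77 mph in metres per second, times ~6 s of warning
    speed_ms = 77 * 1609.344 / 3600   # ~34.4 m/s
    print(round(speed_ms * 6))         # ~207 m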
I use Tesla FSD 99% of the time. I recently drove from Austin to Florida and back.
The only time I had to take over was for road debris on the highway. Off the highway it’s very good about avoiding it. My guess is Tesla has not been focusing on this issue as it's not needed for robotaxi for phase one.
It doesn't have to do with the conditions. It has to do with whether or not it's on the highway. The car uses a different stack for highway and off-highway driving; the highway stack has been a stepchild for quite a long time now.
That used to be true. Autopilot was for highways, FSD was for other roads. FSD can be enabled on both since v12 though and this video is specifically an attempt to use FSD on highways to go cross country.
A 99% reliability is no reliability. So per 100km I should expect 1 issue? Like multiple per week? FSD has to be much better than that to be trustworthy.
> The überphilosopher Bertrand Russell presents a particularly toxic variant of my surprise jolt in his illustration of what people in his line of business call the Problem of Induction or Problem of Inductive Knowledge (capitalized for its seriousness)—certainly the mother of all problems in life. How can we logically go from specific instances to reach general conclusions? How do we know what we know? How do we know that what we have observed from given objects and events suffices to enable us to figure out their other properties? There are traps built into any kind of knowledge gained from observation.
> Consider a turkey that is fed every day. Every single feeding will firm up the bird's belief that it is the general rule of life to be fed every day by friendly members of the human race "looking out for its best interests," as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.
The Black Swan: The Impact of the Highly Improbable, Nassim Nicholas Taleb, page 40
What is the point of "testing" something that everyone knows doesn't work and only exists because a serial liar said so? If Musk says you will survive a fall from a 90-story high-rise... don't test it.
We should rigorously test the things we’re most skeptical of.
You shouldn’t take pills a stranger gives you at a music festival without checking them for loads of things, for example. Even if you don’t ever intend on consuming them it’s nice to have specific accusations with evidence.
"is a Level 2 driver assistance system that requires constant supervision by a human driver" - the reason for human supervision might have something to do with uncommon situations (debris in road being such a situation).
Elon's estimates have always been off, but it is irresponsible to see an obstacle up ahead and assume the computer will do something about it while the driver and passenger debate what said obstacle is. I am not sure if they were trying to win a Darwin Award, and I say that as no particular fan of Musk!
One of my biggest criticisms of Elon is that he rarely takes accountability for his words and nobody ever holds him accountable by asking directly. This isn't a high bar to clear, but some people continuously excuse him as being "overly-eager" in his predictions. If that was the case, he could still provide reasonable updates when a predicted date is missed along with an explanation, even if it's just: "Hey, it turns out this problem was much more difficult than we initially expected and it'll take longer". A lot of the problems that he's trying to solve are actually quite difficult, so it's understandable that predictions will be imprecise... But when you realize that your predictions are going to be wrong, you should have the basic decency to update people.
When you're wielding immense amounts of money, power, and influence I think it's worth trying to do the bare-minimum to hold people accountable for their words and claims. Otherwise your words are meaningless.
In America, you fail the second you apologize or take accountability. Ignoring criticism and deflecting all the time gets you further, as it is part of the game. Unfortunately, this is just an accepted social science-y thing at this point. It is a very much cultural thing of the past couple of decades.
That isn't the case in engineering cultures, like Boeing before it changed into a business culture.
Tesla doesn't cultivate an engineering culture. Tesla encourages a culture of lying. Some engineers have become so corrupted by it that they're willing to lie about things that there's no need to lie about, like quarter mile times of the Cybertruck:
https://www.motortrend.com/reviews/tesla-cybertruck-beast-vs...
https://www.youtube.com/watch?v=5J3H8--CQRE
The lead engineer on the Cybertruck sadly tried to defend the lie:
https://x.com/wmorrill3/status/1746266437088645551
They never even ran that quarter mile.
Is there any engineering culture left in the US?
I feel like this is the case across the board.
Yes, there is. In every place that I've worked, including my current position, acknowledging when you're wrong or have failed at something increases trust in you and your professionalism.
People who always have an excuse, try to shift blame, etc., are assumed to be lacking in competency (let alone ethics and trustworthiness).
My point is less around how engineers behave, and more around how organisations behave.
If an organisation is constantly retrenching experienced staff and cutting corners to increase earnings rather than being driven by engineering first, it doesn't matter what the engineers do amongst themselves. This culture, in fact, rewards engineers doing a bad job.
Not all organizations behave that way, though. If you reword my comment to indicate the company attitudes themselves, it still largely holds true.
I confess to a selection bias, because I won't work at a company that doesn't behave that way. Life is too short for that BS. However, that I maintain employment at the expected pay rates while doing so indicates that there are a lot of companies who don't behave the way you describe.
All that said, I certainly don't deny that there are also a lot of companies who do behave as you describe.
The point is not to have an excuse or shift blame, but to just talk over the issue ("tired of talking about $thing") and shift the conversation.
In America, you actually don't fail when you're wealthy.
It's too bad, because "I'm sorry; this is my fault" is the biggest diffuser of anger and best way to appease mad customers. Try it sometime; the other party goes from ready to kill you to apologetic themselves (if you're genuine). Unfortunately it's seen as a sign of weakness by people like Elon and his cult of impersonators and an admission of liability by the litigious crowd. If you can be strong, confident and ready to admit it when you're wrong you'll not only be successful in confrontational situations but also not a giant dick.
We once had a customer on a project that we'd messed up. I told the customer I was sorry about that and that I'd make an effort to fix the problem. I could see they were happy to hear that. But afterwards my manager called me at home and got mad I'd said sorry to them. His philosophy was never to apologize. Funny thing, later on that customer offered me a better paid position...
The Japanese seem to have this in their DNA.
This is far from universal in the US, but it's certainly true in certain circles.
Then we need to change that. Those with power are best-equipped to effect that change.
I think the problem is many of our senior leaders are just not that good, and the best they can do is model themselves on who they think is successful, like Musk. Then we get a predetermined outcome that repeats. Remember when every senior leader concluded that "Steve Jobs treated people like shit, but was very successful; therefore the path to success is to treat people like garbage"? This was a global phenomenon for years. The "admitted failure is weakness" belief is much stronger.
That's an interesting take. What I have heard from a very old friend of my father is the opposite:
> Knowing when to say thanks and when to say sorry is the key for success.
...and I have used this piece of advice since then; it has paid me handsomely. Of course, this doesn't allow you to be shameless; on the contrary, it requires you to stick to your values as a prerequisite.
I think what allows Elon to behave like that is how he can retaliate without any repercussions since he has tons of money and influence in some circles.
Do you have sources?
Machiavellian
My biggest criticism of Elon is that he supports neo-nazis
And advocates "remigration" and other white nationalist slime on social media.
> One of my biggest criticisms of Elon is that he rarely takes accountability for his words and nobody ever holds him accountable by asking directly. This isn't a high bar to clear, but some people continuously excuse him as being "overly-eager" in his predictions.
I've written off pretty much everything he says since sometime before 2020, too many lies about self-driving to count.
But I'm not someone with any influence (nor do I really want that kind of attention).
> when you realize that your predictions are going to be wrong, you should have the basic decency to update people
Not to get too political, but the last I've heard of Elon Musk is that he was speaking to/mobilizing right wing extremists at a big protest in London. I am also pretty sure he has been trying to do similar things in other European nations (for whatever reason).
It seems to me that it is a bit late to plead for "basic decency" at this moment.
But at the same time let me ask you, what has he got to lose? What financial or reputational risk is he taking by not taking any accountability?
Society needs a "no assholes" policy in order to stay high-trust. Elon not being a pariah because of his grifting is a sign the US is becoming a lower and lower trust society. And it's its billionaires making it so.
This would fly in the face of the smoke and mirrors that props up his meme stock.
He lies relentlessly even to customers who paid for the product.
I know because I’m one of them. FSD paid in full almost seven years ago, still does absolutely nothing in Europe. A five-year-old would do a better job at driving because the Tesla can’t even see speed limit signs correctly.
Tesla takes no responsibility for their misleading marketing and years of lies. Most recently Musk promised in early 2025 that these old cars would get a hardware update that will finally enable the paid-for FSD (as if…) The company itself pretends to know nothing about this latest promise made by its CEO.
It’s insane that a business with a trillion-dollar market cap operates like this. It seems to be more of a cult than a real company.
I think Elon would say that this is a regulatory problem because you guys don't have the same signs or distance units as in the US.
So why take money for it? Computer vision is well past the stage where the information can be read from these signs; there's enough training data, so that's not a real hurdle. If different road geometry or traffic customs/rules are the issue, then just admit that FSD can't generalize like a human and is overfit to the US. Why lie and pretend it's almost human-level?
It is purely a regulatory problem. FSD is not available in Europe due to the slow pace of regulatory approval.
For instance UNECE regulation 79 prevents FSD from using its full capacity to turn the steering wheel.
It's a rat's nest of red tape.
How do EU regulations prevent my car from recognizing speed limit signs?
The real problem is that Tesla sold this in Europe, but never put the slightest effort towards actually developing the localized ML model that could make it work. (Of course the hardware sucks too, but they could have done a much better job with it.)
The reason is that Mobileye patented it (US20080137908A1) and won't let Tesla use it.
If that's the case, why didn't Tesla just issue refunds to us customers in Europe?
If it's really due to a patent, that just makes it worse. Tesla has known all along that they can't deliver the product they sold.
I was being a bit facetious (related to Elon's previous bullshit claims that it was only a regulatory issue keeping everybody's Tesla from moonlighting as a robotaxi by the end of some year in the past... sure, Elon).
>> same signs or distance units
yeah, how could we expect software developers to write code that can replace "z" with "s", handle extra "u"s and divide numbers by 1.6? </s>
Why did you buy FSD if you are in Europe?
I will never understand why people ask why those who bought a thing expected to get that thing
Isn't that the bare minimum requirement for how commerce is supposed to function?
A surprising chunk of HN commenters earnestly believe that "caveat emptor" should be some kind of horrible default way of life. Like businesses should be able to sell anything they want, without regulation, as long as they can convince people to buy those things. And if those things don't work--well it's 100% the customer's fault for being so gullible and not being responsible for the company's quality.
I don't understand why people would want to live that way and argue against those who fight for better.
But my guess is that it's a defense mechanism, essentially the Just World fallacy. "It would really suck to have that bad thing happen to me through no fault of my own. *spoink!* I'm careful (and smart) therefore bad things won't happen to me" (https://dilbert-viewer.herokuapp.com/1999-09-08)
Why did Tesla sell a 7,500 euro feature in Europe if they knew it’s never going to work?
My mistake was assuming this company had the slightest decency to sell things they could actually deliver.
> because the Tesla can’t even see speed limit signs correctly.
This is sad and atrocious. Not only can a Ford Puma (an econobox compared to a Tesla) read almost all speed limit signs correctly, it can also pull speed limit data correctly from its onboard maps when there are no signs. These maps can be updated via WiFi or an onboard modem too.
Tesla mocked "big auto industry", but that elephant proved that it can run if it needs to.
Interestingly, with SpaceX he is much more willing to change plans; he and SpaceX seem to be searching for the right solution.
For self-driving, he simply decided X is right and talked about exponentials, and no matter how many times it fails, there is no reflection whatsoever.
He's a psychopath, in the sense he doesn't feel normal emotions like remorse and empathy. He will lie to your face to get you to buy his product and when it fails to deliver on promises he will lie again.
> One of my biggest criticisms of Elon is that he rarely takes accountability for his words and nobody ever holds him accountable by asking directly.
On one hand, I agree with you that this is maddening.
On the other hand this is, as the kids would say, "the meta" in 202X for business and politics.
Lying at scale and never taking responsibility for anything obviously works exceptionally well, and not just for Elon. I wish it didn't, but it does.
A friend of mine who loves Tesla watched this video and said "many humans would have hit that". I feel we'll be hearing a lot of that excuse.
Well, I have to admit that your friend has a point. Humans are bad at reacting quickly and correctly to unexpected situations, and some debris large enough to damage your car showing up from out of nowhere after several hours of boring driving along a largely straight highway with little traffic is definitely one of these situations. But a self-driving system worth its salt should always be alert, scanning the road ahead, able to identify dangerous debris, and react accordingly. So, different pair of shoes...
I'm not convinced. The debris is clearly visible to the humans a long way off and the adjacent lane is wide open. Avoiding road debris is extremely common even in more congested and treacherous driving conditions. Certainly it's possible that someone texting on their phone or something might miss it, but under normal circumstances it could have been easily avoided.
Even if there wasn't space to swerve, a human would've hit the brakes and hit it much slower
I feel Waymo's LIDAR might have worked here
100% it would have. One of the main things a LiDAR system does is establish a "ground plane", which is the surface on which the car is expected to drive. Any holes or protrusions in that plane stick out like a sore thumb to a LiDAR system; you can see them in the raw data without much of a feature detector, so detecting them and reacting is fast and reliable.
Contrast with Tesla's "vision-only" system, which uses binocular disparity along with AI to detect obstacles, including the ground plane. It doesn't have as good a range, so with a low-profile object like this it probably didn't even see it before it was too late. Which seems to me a theme for Tesla autonomy.
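For anyone curious what "establish a ground plane" looks like in practice, here's a minimal Python sketch: a crude RANSAC-style plane fit over a lidar point cloud, flagging anything protruding above the fitted road surface. The function name, thresholds, and iteration count are made-up illustrative values, not anything from Waymo's actual pipeline.

    import numpy as np

    def debris_candidates(points, n_iters=100, tol=0.05):
        """Fit a ground plane to an (N, 3) lidar point cloud with a crude RANSAC,
        then return points sticking up more than `tol` metres above it.
        Thresholds are illustrative, not tuned for any real sensor."""
        rng = np.random.default_rng(0)
        best_inliers, best_model = 0, None
        for _ in range(n_iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-9:
                continue  # degenerate sample, try again
            normal /= norm
            d = -normal @ sample[0]
            inliers = (np.abs(points @ normal + d) < tol).sum()
            if inliers > best_inliers:
                best_inliers, best_model = inliers, (normal, d)
        normal, d = best_model
        if normal[2] < 0:                # make the plane normal point "up"
            normal, d = -normal, -d
        height = points @ normal + d     # signed height above the fitted road plane
        return points[height > tol]      # anything protruding above the surface

On a flat road most returns land on the plane, so a foot-wide chunk of metal shows up as a tight cluster of above-plane points; real systems then track and classify that cluster over successive sweeps.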
In addition to detecting the object, Waymo has to make some determination about the material. Rigid heavy metal = slam on the brakes and/or swerve. Piece of tire or plastic bag = OK to run over if swerving or hitting the brakes would be more dangerous. Really hard problem that they're concerned about getting right before they open up highway driving.
LiDAR is also good for that because you can measure light remission and figure out how much of the LiDAR energy the material absorbed. Different materials have different remission properties, which can be used to discriminate. That's a compounding advantage because we tend to paint road lane markers with highly reflective paints, which makes them blindingly obvious to a LiDAR.
Elon Musk believes that sensor fusion doesn't work and is fake science.
This is why FSD is still shit in late 2025 and drives like it's drunk.
It's great that AI doesn't care about your feelings tho.
The humans in the vehicle spotted it fine, and we should not tolerate self-driving systems that are only as good as the worst of us.
There are thousands of examples where FSD saw something people did not.
Sad to see HN give in to mob mentality.
> and we should not tolerate self-driving systems that are as good as the worst of us
The person you replied to didn't do that, though:
> But a self-driving system worth its salt should always be alert, scanning the road ahead, able to identify dangerous debris, and react accordingly. So, different pair of shoes...
They never said they did.
I mean, those were direct quotes and it was literally half of their comment, but, sure, "they never said they did".
I think they meant the person you were responding to never claimed that the person they were responding to said that we should tolerate self driving systems that are no better than the worse of us, not that the person that the person you were responding to was responding to never said the thing you very clearly directly quoted.
I think you might have misunderstood someone here. The person you quoted made a generic statement about what we should expect from an autonomous vehicle, but never said (nor implied imho) that the person he responds to didn't expect the same.
Humans routinely drive from LA to NYC without wrecking their cars. In fact, that's the normal outcome.
I've been driving on local and highway roads for 30 years now and I have never come across a piece of debris so large that driving over it would damage my car. Seeing that video, I don't have high confidence that I would have dodged that hazard - maybe 70% sure? The thing is, usually there is plenty of traffic ahead that acts very obviously different in situations like this that helps as well.
All that to say that I don't feel this is a fair criticism of the FSD system.
> I have never come across a piece of debris so large that driving over it would damage my car
More likely you simply drove around the debris and didn't register the memory because it's extremely unlikely that you've never encountered dangerous road debris in 30 years of driving.
I think it's probably because of mostly driving in enough traffic that other cars would have encountered any critical objects first and created a traffic jam around an impassable section.
Unless you're on your phone, with that clear of a view and that much space, 100% you would dodge that, especially in a sedan where your clearance is lower than a truck.
No way. I call in road debris on the freeway once every couple of months. People swerve around it and if it’s congested, people swerving around it create a significant hazard.
You’ve been driving for 30 years and have never seen a semi truck tire in the middle of the road after it ripped off the rim of a truck?
Honestly no, not in the middle of the road, but plenty on the side. The only things I come across in the middle of the roads are paper bags or cardboard for some reason.
But also, I doubt you would break your swaybar running over some retreads
Driving I-5 up to Portland, I had to dodge a dresser that was standing upright somehow in the middle of the lane. The truck in front of me moved into the other lane, revealing that thing just standing there, and I had to quickly make an adjustment similar to what this Tesla should have done. Teslas also have lower bellies; my Jeep would have gone over the debris in the video no problem.
30 years of driving doesn't equate to the same number of miles driven.
Probably a good parable for Waymo vs Tesla here. One is a generalized approach for the entire world, while the other is carefully pre-mapped for a small area.
> All that to say that I don't feel this is a fair criticism of the FSD system.
Yes it is, because the bar isn't whether a human would detect it, but whether a car with LiDAR would. And without a doubt it would, especially given those conditions: a clear day, flat surface, and a protruding object is a best-case scenario for LiDAR. Tesla's FSD was designed by Musk, who is neither an engineer nor an expert in sensors or robotics, and it therefore fails predictably in ways that other systems designed by competent engineers do not.
I don't disagree with that characterization of the technical details. However, I felt the task those drivers set out was asking a different question: how good would the FSD system be at completing a coast-to-coast trip? I don't think this can be answered after a single, highly unlikely accident without a lot more trials.
Imagine there was a human driver team shadowing the Tesla, and say they got T-boned after 60 miles. Would we claim that human drivers suck and have the same level of criticism? I don't think that would be fair either.
If you don't disagree on the characterization of the technical details, then you must realize how very fair it is for us to criticize the system for failing in the exact way it's predicted to fail. We don't need 1000 more trials to know that the system is technically flawed.
What if there is no debris the other 999 times, and the system works fine? The video does not give me that information as a prospective Tesla customer. This looks like a fluke to me.
Those 999 other times, the system might work fine for the first 60 miles.
This is a cross-country trip. LA to New York is 2776 miles without charging. It crashed the first time in the first 2% of the journey. And not a small intervention or accident either.
How you could possibly see this as anything other than FSD being a total failure is beyond me.
>asking a different question: how good would the FSD system be at completing a coast-to-coast trip?
>They made it about 2.5% of the planned trip on Tesla FSD v13.9 before crashing the vehicle.
This really does need to be considered preliminary data based on only one trial.
And so far that's 2.5% as good as you would need to make it one way, one time.
Or 1.25% as good as you need to make it there & back.
People will just have to wait and see how it goes if they do anything to try and bring the average up.
That's about 100:1 odds against getting there & back.
One time.
Don't think I would want to be the second one to try it.
If somebody does take the risk and makes it without any human assistance though, maybe they (or the car) deserve a ticker-tape parade when they get there like Chas Lindbergh :)
> This really does need to be considered preliminary data based on only one trial.
Statistically yes, but look at the actual facts of the case.
A large object on the road, not moving, perfect visibility. And the Tesla drives straight into it.
Not hitting static objects in perfect visibility is pretty much baseline requirement #1 of self driving. And Tesla fails to meet even this.
It does look like lower performance than a first-time driving student.
I really couldn't justify 1000:1 with such "sparse" data, but I do get the idea that these are some non-linear probabilities of making it back in one piece.
It seems like it could easily be 1,000,000:1 and the data would look no different at this point.
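A toy model makes the point. One crash in the first ~70 miles is extremely thin data, but if you (big assumption) treat safety-critical failures as arriving at a constant per-mile rate estimated from this single run, the odds of a clean coast-to-coast trip come out far worse than 100:1:

    import math

    # Assumed constant failure rate: one crash in ~70 miles (2.5% of ~2,800 mi).
    mean_miles_between_crashes = 70.0
    for miles in (2776, 2 * 2776):               # one way, then there and back
        p_clean = math.exp(-miles / mean_miles_between_crashes)
        print(f"{miles} mi: P(no crash) ~ {p_clean:.0e}")
    # ~6e-18 one way, ~4e-35 round trip, under this very rough assumption

A single trial can't pin the rate down, which is the point: the observation is consistent with odds anywhere from bad to astronomically bad.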
> What if there is no debris the other 999 times, and the system works fine?
This argument makes no sense. I take it that you're saying that if we provide the Tesla a road which contains nothing to hit, it won't hit anything?
Well, sure. Also, not interesting.
In a real world drive of almost 3000 miles there will nearly always be things to avoid on the way.
> I take it that you're saying that if we provide the Tesla a road which contains nothing to hit, it won't hit anything?
Not quite. I am saying that basing the judgment on a rare anomaly is a bit premature. It's a sample size of 1, but I base this on my own driving record of 30 years and much more than 3000 miles where I never encountered an obstacle like this on a highway.
> Also, not interesting
I would have liked to see the planned cross-country trip completed; I think that would've provided more realistic information about how this car handles with FSD. The scenario of when there is a damn couch or half an engine on the highway is what's not interesting to me, because it is just so rare. Seeing regular traffic, merges, orange cones, construction zones, etc. etc. now that would have been interesting.
Here's an edge case I'm sure everybody has seen very rarely, but that's still not as uncommon as you think. Watch the video by Martinez if the top video is not correct:
https://abc13.com/post/loose-spool-involved-in-crash-near-be...
This was the 5th time in two months.
Now 2018 might have been a record year, but there have been a number of others since then.
Fortunately for us all, drivers don't have to go through Houston to get from CA to NY, but you're likely to encounter unique regional obstacles the further you go from where everything is pre-memorized.
As we know 18-wheelers are routinely going between Houston and Dallas most of the way autonomously, and a couple weeks ago I was walking down Main and right at one of the traffic lights was one of the Waymos, who are diligently memorizing the downtown area right now.
I'll give Tesla the benefit of the doubt, but they are not yet in the same league as some other companies.
Tesla in 2016: "Our goal is, and I feel pretty good about this goal, that we'll be able to do a demonstration drive of full autonomy all the way from LA to New York, from home in LA to let’s say dropping you off in Times Square in New York, and then having the car go park itself, by the end of next year" he said on a press call today. "Without the need for a single touch, including the charger."
Roboticists in 2016: "Tesla's sensor technology is not capable of this."
Tesla in 2025: coast-to-coast FSD crashes after 2% of the journey
Roboticists in 2025: "See? We said this would happen."
The reason the robot crashed doesn't come down to "it was just unlucky". The reason it crashed is because it's not sufficiently equipped for the journey. You can run it 999 more times, that will not change. If it's not a thing in the road, it's a tractor trailer crossing the road at the wrong time of day, or some other failure mode that would have been avoided if Musk were not so dogmatic about vision-only sensors.
> The video does not give me that information as a prospective Tesla customer.
If you think it's just a fluke, consider this tweet by the person who is directing Tesla's sensor strategy:
https://www.threads.com/@mdsnprks/post/DN_FhFikyUE/media
Before you put your life in the hands of Tesla autonomy, understand that everything he says in that tweet is 100% wrong. The CEO and part-time pretend engineer removed RADAR thinking he was increasing safety, when really he has no working knowledge of sensor fusion or autonomy, and he ended up making the system less safe. Leading to predictable jury decisions such as the recent one: "Tesla found partly to blame for fatal Autopilot crash" (https://www.bbc.com/news/articles/c93dqpkwx4xo)
So maybe you don't have enough information to put your life in the hands of one of these death traps, but controls and sensors engineers know better.
Come to Chicago, you'll see some debris in the first few days, and plenty of humans dodging it at high speeds
I'd imagine the highway doesn't normally have debris that's nearly the same color as the surface of the highway.
I encounter such things a few times a year. Usually a retread that has come off a truck tire.
The highway often has debris on it at night which can be even harder to see
I wonder if a non-visual sensor could make the distinction.
> Well, I have to admit that your friend has a point. Humans are bad at reacting quickly and correctly to unexpected situations, and some debris large enough to damage your car showing up from out of nowhere
I read this comment before seeing the video and thought maybe the debris flies in with the wind and falls on the road a second before impact, or something like that.
But no, here we have bright daylight, perfect visibility, the debris is sitting there on the road visible from very far away, the person in the car doing commentary sees it with plenty of time to leisurely avoid it (had he been driving).
Nothing unexpected showed up out of nowhere, it was sitting right there all along. No quick reaction needed, there was plenty of time to switch lanes. And yet Tesla managed to hit it, against all odds! Wow.
My impression of Tesla's self driving is not very high, but this shows it's actually far worse than I thought.
Literally, the passenger saw it and leaned in, and the driver grabbed the steering wheel as if to brace himself. That object on the road was massive, absolutely huge as far as on-road obstacles go. The camera does not do it any justice: it looks like it's 3 feet long, over a foot wide, and about 6 or 7 inches high lying on the road. Unless a human driver really isn't paying attention, they're not hitting that thing.
> Well, I have to admit that your friend has a point. Humans are bad at reacting quickly and correctly to unexpected situations,
This was not one of those situations.
> and some debris large enough to damage your car showing up from out of nowhere after several hours of boring driving along a largely straight highway with little traffic is definitely one of these situations.
Again, this was definitely not one of those situations. It was large, it was in their lane, and they were even yapping about it for 10 seconds.
> But a self-driving system worth its salt should always be alert, scanning the road ahead, able to identify dangerous debris, and react accordingly. So, different pair of shoes...
This is what humans already do (and if we didn't do it, we'd be driving off the road). Based on what you're saying, I question whether you're familiar with driving a car, or at least driving on a highway between cities.
And a self-driving car should have better sensors than a human (like lidar)
> I have to admit that your friend has a point
Do they? "Many humans" would hit that? The humans in the car spotted the debris at least 8s before the impact. I don't think any humans would hit that in broad daylight unless they were asleep, completely drunk, or somehow managed to not look at the road for a full 10s. These are the worst drivers, and there aren't that many because the punishment can go up to criminal charges.
The argument that "a human would have made that mistake" backfires, showing that every Tesla equipped with the "safer than a human driver" FSD is in fact at best at "worst human driver" level. But if we still like the "humans also..." argument, then the FSD should face the same punishment a human would in these situations and have its rights to drive any car revoked.
Or they would hit it if they were busy fiddling with the car's autodrive system. These humans would have avoided it had they not wasted time speculating about whether the autodrive system would save them. They would have been safer in literally any other car that didn't have an autodrive.
Bullshit. Some humans might hit that because they weren't paying attention, but most people would see it, slow down, and change lanes. This is a relatively common scenario that humans deal with. Even the passenger here saw it in time. The driver was relying on FSD and missed it.
I don't think FSD has the intelligence to navigate this.
When the self-driving car killed a pedestrian several years ago, the initial sentiment on this site for the first few hours was essentially "those dastardly pedestrians, darting into traffic at the last second, how are you supposed to avoid them?" It took several hours for enough information to percolate through to make people realize that the pedestrian had been slowly and quite visibly crossing the road and the self-driving car (nor the safety driver) never did a thing to react to it.
Another thing to keep in mind is that video footage is much lower quality than what we can see with our human eyeballs. At no point in the video can I clearly identify what the debris is, but it's clearly evident that the humans in the car can, because they're clearly reacting to it seconds before it's even visible to us in the dash-cam-quality footage. I will freely accept that many drivers are in fact bad drivers, but a carcass (I think?) on a lane visible for >10 seconds away is something that anyone who can't avoid needs to have their license revoked.
(Assuming I know which accident you're referring to) The car that killed the pedestrian in Florida wasn't using supervised full self driving, he was using autopilot (which was basically adaptive cruise control at the time).
No, this was the Uber that ran over the homeless woman in Arizona.
I'm sorry, I thought we were just talking about Teslas, my bad.
I don't love Tesla (though I would like an electric car). I don't think it's unlikely that someone driving could have hit that or caused an even worse accident trying to avoid it.
However, I'm sure a human driver in the loop would have reacted. Having the driver sit there watching a machine do its thing 99% of the time, while being expected to step in and save that 1% situation, is a horrific misreading of human nature.
It's their standard go-to excuse. "I would have done that too" really says more about them as drivers than anything else.
Those humans saw the debris. What happens next, when a human is actively at the wheel, is that the driver should check all the mirrors, decide whether to change lanes or brake, and execute; done badly, that could lead to a movie-like multiple-car accident. Hitting the debris is the least dangerous line of conduct if there are cars all around. That looked like an empty road, but who knows.
By the way, playing a lot of racing video games is great training for dealing with that sort of stuff, except maybe for getting good with mirrors. I've been in a few dangerous situations, and each was just the ten-thousandth averted crash. No time to think, only reflexes.
I highly recommend people take a high performance/race driving course if they can. I did a single day one which involved high speed maneuverability trials designed to be useful in emergency scenarios (swerving, braking, hard turns) followed by a few laps around a racetrack.
It's one of the best ways to figure out what it feels like to drive at the limits of your car and how you and it react in a very safe and controlled environment.
Did your friend make any mention that the passenger saw it hundreds of feet away and even leaned in as they headed directly towards it? The driver also recognized it and grabbed the wheel as if to say "brace for impact!".
Obviously, in this particular case the humans wouldn't be hitting that. The people in the video have clearly seen the object, but they didn't want to react because that would have ruined their video.
Even if they did not understand what it is, in the real world when you see something on the road you slow down or maneuver to avoid it, no matter if it is a harmless piece of cloth or something dangerous like this. People are very good at telling when something is off; you can see it in the video.
I wouldn't. And I'm not that great a driver. The Tesla should be significantly better than me.
Humans would have slowed down at least. After watching it many times, that shadow up ahead is too large not to warrant concern.
Sure and many humans have no business having a driver's license.
> A friend of mine who loves Tesla watched this video and said "many humans would have hit that".
The very same video demonstrates this is not true, since the human in the video sees the debris from far away and talks about it, as the self-driving Tesla continues obliviously towards it. Had that same human been driving, it would've been a non-issue to switch to the adjacent lane (completely empty).
But as you said, the friend loves Tesla. The fanboys will always have an excuse, even if the same video contradicts that.
Question - isn't P(Hitting | Human Driving) still less than P(Hitting | Tesla FSD) in this particular case [given that if this particular situation comes up, Tesla will always fail, whereas some/many humans would not]?
The humans in the video spotted it far in advance. I think that any human watching the road could easily avoid that.
Clearly not, because they hit it and didn't even touch the wheel until they were basically on it. It can be hard to tell not just when something is there, but whether it's thick enough to warrant changing lanes
These influencers are incentivized to not touch the wheel because then that ruins the goal of their video.
Did you notice the right side passenger?
The question is whether avoiding the obstacle or braking was the safest thing to do. I did not watch the entire test, but there are definitely cases where a human will suddenly brake or change lanes and cause a very unsafe condition for other drivers. Not saying that was the case here, but sometimes what a human would do is not a good rule for what the autonomous system should do.
An enormous part of safe driving is maintaining a mental map of the vehicles around you and what your options are if you need to make sudden changes. If you are not able to react to changing conditions without being unsafe, you are driving unsafely already.
This is an important aspect of why "supervised" self driving is much more dangerous than just driving.
Yes. Humans would. Which is why the car should be able to handle the impact. My Honda Civic has had worse without issue. The suspension should be beefy enough to absorb the impact with, at worst, a blown tire. That the car has broken suspension says to me that Teslas are still too fragile, built more like performance cars than everyday drivers.
With millions of Teslas on the road, one would think that if that were true we would have heard something by now. My absolute worst car quality-wise ever was a Honda Accord. And I have owned shitty cars, including a Fiat. My most reliable car was a Honda Civic before I “upgraded” to a brand new Accord. I abuse my Tesla and so far no issues driving on some of the worst roads in the country. I must hit 100 potholes per month and have already blown a tire. It’s not a fun car to drive like a GTI (which I own as well), but it’s definitely a solid car.
Cars with "bad" suspension tend to survive potholes. A car with slow-to-move suspension will see the wheel dip less down into the hole when traveling at speed. But that is the exact opposite behabior you want when dealing with debris, which requires the supension to move up rather than down. "Good" systems will have different responce curves for up than down. Quazi-luxury cars fake this by having slow suspension in both directions, to give the sense of "floating over potholes".
[Cut, google ai provided wrong numbers]
A human did hit that..
I guess the point was the human was intentionally letting the car do its thing, though.
With their hands on the steering wheel and foot next to a brake and two people looking out the window.
Are you saying "they let the car drive on its own but were still paying attention"?
Not sure how to take this reply.
That a human was still in the loop in addition to a computer and both missed it.
> That a human was still in the loop in addition to a computer and both missed it.
Listen to the audio in the video. The humans do see it and talk about it for a long time before the car hits it. Had a human been driving, plenty of time to avoid it without any rush.
They do nothing to avoid it presumably because the whole point of the experiment was to let the car drive, so they let it drive to see what happens. Turns out Tesla can't see large static objects in clear daylight, so it drives straight into it.
But that's not what happened.
They saw it, called it out, and very deliberately let the car (not) deal with it because that was the point of what they were doing.
They did not "miss" anything.
That's laughable. Any human who couldn't avoid a large, clearly-visible object in the middle of an empty, well-lit road should not be operating a vehicle.
That's not to say that there aren't many drivers who shouldn't be driving, so both can be true at once, but this is certainly not a bar against which to gauge autonomous driving.
I would counterpoint that my little cheap Civic has hit things like that and hasn't broken a thing. HEH.
The best time was a very badly designed speed bump that I didn't even hit at high speed, but one side was ridiculously inclined vs. the other, so the entire Civic's front end just smashed right into the pavement and dragged for 3 feet. I wouldn't be surprised if I went into a body shop and found the front end tilted upwards by a few centimeters. lol
Timestamp 8:00-8:30. Your Civic is not hitting that and surviving any better than the Tesla. It just got luckier. It may be easier to get lucky in certain vehicles, but still luck based.
They are right.
Yes many human drivers would hit it. The bad ones. But we should want driverless cars to be better than bad drivers. Personally, I expect driverless cars to be better than good drivers. And no, good drivers would not hit that thing.
Anecdotal: I am surprised how the basic Tesla autopilot often cannot even read the speed limit signs correctly. In perfect lighting conditions. It just misses a lot of them. And it does not understand the traffic rules enough to know when the speed limit ends.
I know that the basic autopilot is a completely different system than the so-called FSD.
But equipped with that experience from the basic autopilot, it does not surprise me that a large debris on the road was completely missed by the FSD.
In FSD, there's an annoying bug where Georgia minimum speed signs are misinterpreted as speed limit signs. It's because most states that do minimum speed signage combine it with the speed limit into a single sign, but Georgia has separate, freestanding minimum speed signs. Thankfully, recent versions of FSD don't immediately start slowing down anymore when a misinterpreted sign makes the car think that the speed limit on a highway is 40; but the sign comprehension bug has remained unresolved for years.
I use autopilot for local driving (city - suburbs) and I pay for FSD when on long road trips (>300 miles). You are correct, they are completely different things so one doesn’t correlate to the other one.
That they are different things is really disappointing. If you want people to trust the system enough to buy FSD, the autopilot mode should use the same system, with limited functions. There is no reason why the vision/detection systems should be different. Especially if you already have the proper hardware installed…
Completely agree. Rate-limit it the way LLMs do, where you get something like 5 FSD drives per month on the free tier.
Tangentially - if you as a European happen to drive on US highways, you will notice that they are heavily littered with fallen cargo, aluminum ladders, huge amounts of shredded tires, and occasionally a trailer without a towing car... It has been so bizarre for me to observe this. Nobody is cleaning that?
I just got back from a trip to the USA where I spent about five days driving around Michigan, Illinois, and Wisconsin and the number of shredded truck tires on the highways was flabbergasting.
From what I understand, in the USA when a truck tire wears down they put a new layer of tread on it. But it doesn't seem to work very well as it eventually peels off.
Those are called retreads and they are not uncommon worldwide. If you're seeing anything other than long thin strips of tread on the road it's not a retread related failure.
Every now and then the Karens get to screeching about it and it reaches a critical mass and the NHTSA does a study and they find that most of the debris you're seeing has nothing to do with retreads. Here's the summary of a recent one:
https://www.moderntiredealer.com/industry-news/commercial-bu...
Retreading like this is also entirely normal in Europe. I guess they just have more trucks?
Retreading is legal, but only certified companies with their own E mark can legally do it. Combine that with the more in-depth 6- or 12-month inspections for >3.5t trucks, and it usually means the tires are in better condition.
Truck tire chunks on freeways was one of the biggest surprises to me.
I hit a retread as it became detached from the lorry I was following on the M25, UK. Scary moment, similar to the video in TFA, + an expensive repair job.
I'm from Norway, and visited a buddy in Florida back in the early 2000s. It was my first time to the US.
I recall I was completely flabbergasted by all the cars just ditched along the highway. There were lots of them, just off the road into the grass on the side or whatever.
I asked my buddy about it and he said it was usually tires, as it was cheaper to buy another car than get new tires... Well that didn't help my blown mind one bit.
Mind you, on the way to his house I passed a Kia dealer which literally had a huge "buy one car - get one free" sign outside...
When I was a boy in Florida in the 1970s, there was an annual inspection for automobiles. Some other states still do this. It would certainly improve overall safety on the roads if we still had any minimum requirements.
the cash for clunkers program put a huge dent in the ultra cheap used car market. but also Florida has basically no car laws.
I find that open bed pickup trucks contribute a lot to stuff on the freeways.
Your rake is much less likely to fall out of a van than out of the bed of a pickup.
So if stuff can't fall out, it won't get cleaned up.
Second, this is a metal ramp used to load vehicles on a trailer (think bobcat-like).
To tow a trailer like that in Europe requires additional licenses, which comes with training around tying down EVERYTHING and double checking.
In the USA you are allowed to drive with this with the same license you need to drive a Smart Car.
> Nobody is cleaning that?
In the linked video highway patrol comes out to remove the debris just a few minutes after they hit it. (highway patrol had been called out prior them hitting it)
It's been years since I've seen anything you couldn't drive over that wasn't in the travel lane.
Around here anything big enough to matter gets reported, the cops fling it to the side of the road and it gets picked up on a schedule because they always pick things up a few weeks before they mow (presumably because hitting garbage isn't great for the mowers).
It depends where in the US you drive. It’s a big country with independent state governments. It’s like saying I was driving in Romania and I was shocked by how bad European highways are. I lived in Texas and the stuff I saw on the highway was crazy, vacuum cleaners, decorated Christmas trees and refrigerators. Most parts of the country interstate and highway systems are pretty clean.
Romania has pretty good highways now.
Belgium would be a better example of terrible roads
I lived in two parts of California, and New Jersey for almost three decades and I've traveled to a lot of the US for leisure or work.
I honestly can't recall ever feeling like I was going through a markedly different area in either better or worse directions.
When it comes to road conditions it's often comical how different roads can be just across state or county lines. From almost completely destroyed roads barely holding together to a perfectly clean and flat asphalt road. Where I have noticed more trash off highways is rural towns or dense urban areas that run right alongside highways. I definitely noticed more trash in the south (Texas/Oklahoma/Louisiana) than north (Iowa, Minnesota, Michigan).
It is clearly designed to test the self driving...
Why are Dutch or Swiss highways so much more pristine than French and German ones?
Have you compared distances?
German ones are still way way cleaner than US highways.
Germany is also the country with the most traffic because it's kind of at the center of everything, and it pays for roads with income tax instead of tolls.
The French charge really high tolls and have very little traffic compared to Germany. They really don't have an excuse.
We do have road tolls for trucks in Germany. It's been like that for quite some time.
> They really don't have an excuse.
Oh but we do. Most of the state-owned motorways have been sold off to conglomerates about two decades ago, dividends and exec comps have to come from somewhere.
Excuses is pretty much all the French have tbh..
In Germany trucks pay tolls for highways. Also, all the gas is taxed and goes into the federal budget that is then used to finance highways, so I would say everybody filling up is financing highways.
Austria has road tolls and similar tax on gas, so do France and Italy. Actually Diesel is even more expensive.
So still the cheapest roads to be on for foreigners.
You don't need to drive far, just get on a US highway and there is dangerous litter every few hundred meters. In extreme cases it goes down to a few dozen meters. Sometimes it was like driving in some Mad Max movie.
Litter? Sure. Dangerous litter? Every few hundred meters? No. Not sure where you're driving, but no, that's not in general the way US highways are.
I mean, I have seen some dangerous things. Not at the rate you describe, though. Not even close.
> Why are Dutch or Swiss highways so much more pristine than French and German ones?
??
They are not..
Yes, people call it in and cops clean it up. We just have a lot of roads and people who pack stuff in trucks poorly.
... cops? There's no, like, highway traffic authority with their own maintenance force to take care of this?
There usually nominally is, and they may do some of the work, but overfunding law enforcement and then having them cover work that should have been done by other agencies whose funding has been limited by the drive to stuff more money into LE is a pretty common pattern for US state and local governments.
But also, the dominance of car culture in the US is such that the principal state police force may actually be principally designed as a component of the highway traffic authority, as is the case in California, where the Highway Patrol was always larger than and in 1995 absorbed the State Police.
There is in most states. There are often signs periodically telling you what phone number to call to report such hazards. If possible, it's better to call that number than 911, but if you don't know the number, 911 will do. They'll just forward your report to that agency anyway.
A ladder in the road is a legitimate emergency. If you call 911 to report a ladder in the road, they will direct the issue to the relevant agency (which will be the state police, in all likelihood, because they will need to close a lane).
I'm sorry. Could you repeat that? I couldn't make out what you said clearly?
All I heard was "taxes".
/s
On a more serious note, in the US we generally go in the direction of fewer services rather than more services. It, of course, leads to massive inefficiencies, like police removing shredded tires. But it's very difficult here, politically speaking, to get Americans to agree to add government services.
It greatly depends on where you are.
But yes there are folks whose job it is to clean that stuff up.
You might have also noticed that the pavement is terrible. Mostly this comes down to the fact that in Europe the fuel taxes are 3-4x ours.
Fuel tax is a big factor, but the US has a lot of road. The US has 3x the amount of paved surface vs. Germany. European winters are also milder than in the US. I'm not sure how many European roads would survive going from -10 to 140 like they do in the Midwest.
Actually, that's surprisingly little road for the US. The population is significantly larger (more than 3x), so on a per-person basis roads in Germany see much more use than in the US. And roads there are in a much better state than in the US despite the higher usage rate.
So, as a consequence of this: if the US spent the same per-person dollar amount on upkeep, US roads would have WAY more money per unit of road to be maintained with. Yet the outcome is obviously worse.
Doh! I left out a key detail. The US has 3x the paved surface per person!
Also more VMT, which would tend to balance the excess roads, because more driving leads to more fuel tax revenue. USA has more than double the VMT per capita of Germany. If the fuel tax was appropriately set, then this factor would compensate for the greater sizes of the roads.
Not just that but also tolls. There are way more toll roads, at least where I've lived in Europe, compared to where I've lived in the US (Spain being the one very noticeable exception between France and Portugal).
Yeah, in Portugal the highway operator would be liable for that accident.
They'd be liable even if it was roadkill, as they're responsible for ensuring big animals don't get onto the roads.
A lot of apologists say that "a human would have hit that".
That's kind of irrelevant, this technology is meant to be safer and held to a higher standard.
Comparing to a human is not a valid excuse...
> That's kind of irrelevant, this technology is meant to be safer and held to a higher standard
I don't think that is the case. We will judge FSD on whether it causes more or fewer accidents than humans, not necessarily in the same situations. The computer is allowed to make mistakes that a human wouldn't, if in return the computer makes a lot fewer mistakes in situations where humans would.
Given that >90% of accidents are easily avoidable (speeding, not keeping enough safety distance, drunk/tired driving, distraction due to smartphone usage), I think we will see FSD be safer on average very quickly.
> I don't think that is the case.
It's the standard Tesla set for themselves.
In 2016 Tesla claimed every Tesla car being produced had "the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver": https://web.archive.org/web/20161020091022/https://tesla.com...
Wasn't true then, still isn't true now.
> I think we will see FSD be safer on average very quickly.
This is what Musk has been claiming for almost a decade at this point and yet here we are
That's the main advantage self-driving has over humans now.
A self-driving car of today still underperforms the top of the line human driver - but it sure outperforms the "0.1% worst case": the dumbest most inebriated sleep deprived and distracted reckless driver that's responsible for the vast majority of severe road accidents.
Statistics show it plain and clear: self-driving cars already get into fewer accidents than humans, and the accidents they get into are much less severe too. Their performance is consistently mediocre. Being unable to drink and drive is a big part of where their safety edge comes from.
The statistics on this are much less clear than Tesla would like us to believe. There's a lot of confounding factors, among them the fact that the autonomous driver can decide to hand over things to a human the moment things get hairy. The subsequent crash then gets credited to human error.
That's an often-repeated lie.
Tesla's crash reporting rules:
> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed.
NHSTA's reporting rules are even more conservative:
> Level 2 ADAS: Entities named in the General Order must report a crash if Level 2 ADAS was in use at any time within 30 seconds of the crash and the crash involved a vulnerable road user being struck or resulted in a fatality, an air bag deployment, or any individual being transported to a hospital for medical treatment.
At highway speeds, "30 seconds" is just shy of an eternity.
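For a rough sense of scale (assuming a typical 65 mph highway speed; this figure isn't from either reporting rule): 65 mph is about 29 m/s, so a 30-second window covers roughly 29 x 30 ≈ 870 meters, a bit over half a mile of travel before the impact.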
Tesla doesn't report crashes that aren't automatically uploaded by the computer. NHTSA has complained about this before. Quoting one of the investigations:
From: https://static.nhtsa.gov/odi/inv/2022/INCR-EA22002-14496.pdf
Even if this were the case, it would still skew the statistics in favor of Tesla: Autopilot gets to hand off the complicated and dangerous driving conditions to the human, who then needs to deal with them. The human, by contrast, cannot do the same - they need to deal with all hard situations as they come, with no fallback to hand off to.
I don't think the decision should be or will be made based on a single axis.
> The computer is allowed to make mistakes that a human wouldn't, if in return the computer makes a lot fewer mistakes in situations where humans would.
This subverts all of the accumulated experience of other road users about what a car will do. Everyone is used to the potential issues caused by humans; on top of that, other road users will now have to learn the quirks of FSD and keep an eye out for abnormalities in its behaviour?
That's just unrealistic: not only will people have to deal with what other drivers can throw at them (e.g. veering out of lane due to inattention), but they'll also have to be careful around Teslas, which can phantom brake out of nowhere, fail to avoid debris (shooting it off on unpredictable paths), etc.
I don't think we should accept new failure modes on the road for FSD, requiring everyone else to learn them to be on alert, it's just a lot more cognitive load...
> That's kind of irrelevant, this technology is meant to be safer and held to a higher standard.
True, but not even relevant to this specific example. The humans clearly saw it and would not have hit it, so we have a very clear example where Tesla is far inferior to humans.
Indeed... You can see the driver reaching for the wheel, so presumably he saw it coming and would have hit the brakes. He left the car to do its thing, thinking it knows better than him... maybe.
> presumably he saw it coming
Not presumably, we know for sure since they are talking about it for a long time before impact.
The point of the experiment was to let the car drive so they let it drive and crash, but we know the humans saw it.
Ah, I didn't watch with audio.
A human would not have hit that; the two guys see it coming from a long way off and would have stopped or changed lanes.
Personally if the road was empty as here, I'd have steered around it.
This was really best possible driving conditions - bright day, straight dry road, no other cars around, and still it either failed to see it, or chose to run over it rather than steering around it or stopping. Of all the random things that could happen on the road, encountering a bit of debris under ideal driving conditions seems like it should be the sort of thing it would handle better.
And yet Tesla is rolling out robo taxis with issues like this still present.
It's so clear, dry, perfect lighting, no traffic or anything. That's shocking.
The debris is pretty close to the color of the road. Seems like a good case for radar/lidar ¯\_(ツ)_/¯
Who knew spending more on extra sensors would help avoid issues like this? So weird.
Sensor fusion is too hard is the new cope.
To paraphrase sarcastically: "But what happens when the sensors disagree???"
[Insert red herring about MCAS and cop out about how redundancy is "hard" and "bad" "complexity".]
Have a minimum quorum of sensors, disable one if it generates impossible values (while deciding carefully what is and isn't possible), use sensors that are much more durable, reliable, and can be self-tested, and then integration-test and subsystem-test thoroughly, and test some more.
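For what it's worth, that quorum-and-sanity-check idea isn't exotic. Here's a minimal sketch in Python of that kind of logic, with invented sensor names, ranges, and thresholds purely for illustration (not anyone's actual stack):

    from statistics import median

    def fuse_range_estimates(readings, plausible=(0.0, 300.0), min_quorum=2):
        """Drop sensors reporting impossible values (outside an assumed 0-300 m
        window), then require a quorum before trusting the fused estimate."""
        valid = [r for r in readings.values() if plausible[0] <= r <= plausible[1]]
        if len(valid) < min_quorum:
            return None  # not enough agreeing sensors: flag it, degrade, or hand off
        return median(valid)

    # Hypothetical readings: camera and radar roughly agree, lidar returns garbage.
    print(fuse_range_estimates({"camera": 41.0, "radar": 39.5, "lidar": -7.0}))  # 40.25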
Maybe it decided it was a harmless plastic bag or something.
I've made that mistake before and the "plastic bag" cut my brake lines. Now I try to avoid anything in the road that shouldn't be there.
I've heard stories of plastic bags on the highway making their way into the path of the front-facing cameras of vehicles. Resulting in automatic emergency braking at highway speeds.
Do we have an example of a LiDAR-equipped car that can avoid that?
Of course. There are plenty of LiDAR demos out there. For a starter : https://www.youtube.com/watch?v=gylfQ4yb5sI BTW: https://www.reddit.com/r/TeslaFSD/comments/1kwrc7p/23_my_wit...
Tesla is essentially the only one that doesn't use lidar. I'd be very surprised if a Waymo had a problem with this debris.
And yet the humans detected it without lidar?
Human eyes are better by most metrics than any camera, and certainly any camera which costs less than a car. Also, obviously, our visual processing is, by most metrics, so much better than the best CV (never mind the sort of CV that can run realtime in a car) that it's not even funny.
they're making fun of Tesla, which stopped putting radar (ed: I misremembered, thanks to the commenter below) in their cars during the pandemic when it got expensive and instead of saying "we can't afford it", claimed it's actually better to not have lidar and just rely on cameras
Tesla has never had LIDAR on production cars, only mapping/ground truth and test vehicles. It was radar that disappeared during the pandemic.
Yeah! Just add more sensors! We're only 992 more sensors away from full self-driving! It totally works that way!
The debris? The very visible piece of debris? The piece of debris that a third party camera inside the car did in fact see? Adding 2 radars and 5 LIDARs would totally solve that!
For fuck's sake, I am tired of this worn out argument. The bottleneck of self-driving isn't sensors. It was never sensors. The bottleneck of self-driving always was, and still is: AI.
Every time a self-driving car crashes due to a self-driving fault, you pull the blackbox, and what do you see? The sensors received all the data they needed to make the right call. The system had enough time to make the right call. The system did not make the right call. The issue is always AI.
It’s a lot easier to make an AI that highly reliably identifies dangerous road debris if it can see the appearance and the 3D shape of it. There’s a fair bit of debris out there that just looks really weird because it’s the mangled and broken version of something else. There are a lot of ways to mangle and break things, so the training data is sparser than you’d ideally like.
You want the AI to take the camera's uncertainty about a road-colored object and do an emergency maneuver? You don't want to instead add a camera that sees metal and concrete like night and day?
The problem can be sensors, even for humans. When a human's vision gets bad enough, they lose their license.
We had sensors that can beat "a human whose vision got bad enough to get the license revoked" used as far back as in the 2004 DARPA competition.
That got us some of the way towards self-driving, but not all the way. AI was the main bottleneck back then. 20 years later, it still is.
We don't have a bottleneck anymore. We have Waymo. They seemed to have solved whatever the issue was...I wonder what the main difference between the Waymo system and the Tesla system is?
The "main difference" is that Waymo wouldn't even try to drive coast to coast.
Because it's geofenced to shit. Restricted entirely to a few select, fully pre-mapped areas. They only recently started trying to add more freeways to the fence.
You're right, they wouldn't try, but I don't think there's any evidence for the idea that Waymo couldn't pull this trip off now from a technical POV. Even if they're pre-mapping, the vehicles still have to react to what's actually around them.
Adding more sensors (such as LIDAR) can certainly make it easier for the "AI" (Computer Vision) to detect & identify the object.
Respect to these guys for committing to the bit and letting the Tesla hit it. This is real journalism. Stark contrast to so much of the staged engagement-bait on YouTube.
I noticed that, too. They seemed to tense up and notice it with enough time to spare to manually intervene and either slam on the brakes or swerve to avoid it.
Guy in the video calls it a "girder", but it is almost certainly a trailer ramp: https://www.proxibid.com/lotinformation/54574457/2006-murray...
That was my hunch, but Google Lens was able to ID it. Possible that Waymo vehicles can do this too, but that must take some serious compute and optimization to do at highway speeds.
A lot of photos on that page; this is the ramp in question:
https://www.proxibid.com/_next/image?url=https%3A%2F%2Fimage...
It's huge, far bigger than it looks in the video. You would have to be asleep at the wheel to miss it.
Thanks for the direct link. That accident could've been so much worse if the ramp had caught an edge under the car. It could easily have flipped it. They were really taking a risk letting the car run over it.
Two more years guys, just give him two more years, a few more billion, a bit more political power and I promise he'll give you your fancy self driving toy. (Repeat from 2012 ad infinitum)
> Two more years guys, just give him two more years, a few more billion, a bit more political power and I promise he'll give you your fancy self driving toy. (Repeat from 2012 ad infinitum)
The trillion dollar pay package will make it happen, that's what was missing.
Solar roofs, Dojo, Hyperloop, robotaxi, roadster, semi, mission around the Moon, man on Mars by 2020, it’s all coming guys he promised!
Boring Company. Whatever happened to those tunnels?
They've only built out the system at the Las Vegas Convention Center, and every other project seems to have fallen through.
Teslas can't even navigate the tunnels they made themselves without a driver, it's really impressive.
> fallen through
That word choice.
> They've only built out the system at the Las Vegas Convention Center
Kinda sorta.
It only operates a few hours a day, and the cars are not self-driving.
It's like a dedicated tunnel for Ubers.
They need to take lessons from the Morgantown PRT, which designed the system they want 50 years ago and whose capabilities they have yet to match.
They did their job: they discouraged the construction of high-speed rail.
Musk has, IIRC, actually admitted that this was their purpose.
No it didn't. The Hyperloop had no impact whatsoever on California High Speed Rail.
You're right, it wasn't about high speed rail.
It was about scuttling the expansion of the monorail to the airport.
Musk just picked up after the taxi cartel collapsed.
the problem is that TSLA (the company) is pivoting hard to humanoid robots, which is a relatively easier problem, perfectly big market and Musk is a terrific salesguy. Medium/long term, humanoid robots are commodity but so were electric cars and TSLA (the stock) rode that to glory.
> which is a relatively easier problem
I hope they don’t believe that…
> which is a relatively easier problem
Eh? _Useful_ humanoid robots are if anything considerably harder. Tech demos are easier, granted; humanoid robot tech demos date back to the 80s or so.
If it's not your money does it matter? I don't think it's fair to say he doesn't deliver anything e.g. he did cars, rockets, and now AI (the rate X are building out training capacity is genuinely astonishing) at the same time.
Yes it does matter. Money thrown away on his lies is money that isn’t invested on real projects.
Overall SpaceX is incredibly successful and Tesla is still reasonably successful. So overall the money is hardly thrown away.
Success being defined here as "something that makes numbers go up in some capacity"
From a stockholders point of view tesla is a massive success.
From a car design and development point of view, it's a massive waste of lost opportunities.
From a self driving interested person, it's a joke.
Really depends on how you view things, in a purely money in the stock market aspect tesla is doing great.
Well in the same measure of success that we use for every company.
No. They are proxies. But not the truest measures of success.
Ok so how is SpaceX not successful? Please explain.
> > Ok so how is SpaceX not successful? Please explain.
Only isolated people need Starlink, and 95% of people on Earth live in urban cities with pop > 100,000. So it's a product for the 5%.
I think the point is that if it's his money he's pissing away, then any other projects the money would have been spent on would have been equally dubious in any case. He's not going to, all of a sudden, become wise simply because he doesn't spend money on what he's spending money on.
Did we give him wayyy too much free money via subsidies? Yes. But that was our mistake. And if we hadn't given it to him, we would have given it to some other scam artists somewhere or other. So even in the case of the counterfactual, we could expect a similar outcome. Just different scumbags.
> And if we hadn't given it to him, we would have given it to some other scam artists somewhere or other.
No we wouldn’t have. Not every dollar we give goes to scam artists. And there are a whole lot of industries and companies far less deceitful.
idk man, I'm not in the matrix deep enough to give a shit about "building out training capacity" of LLMs, I think there are way more important topics like not destroying the fabric of our society and political systems, but idk, I guess I'm just a far left antifa maniac terrorist or something
Yeah we should spend the money on the HR dept instead
SpaceX and the Starship development is heavily funded by tax-payer money...
Easy problem to solve. We just need to train an AI image classifier with highly annotated images of every single possible combination of road debris in every single country in the entire world. Shouldn't take longer than a month, right team? /s
Hey boss, should we set aside the grade crossing barrier recognizer for that?
It seems well documented that the Tesla system is at Level 2, and it requires "hands-on supervision".
Has Elon lied about the capabilities? Yes, on many occasions.
Crashing your car to prove it seems like a waste when the documentation is clear.
""" Tesla Autopilot is an advanced driver-assistance system (ADAS) developed by Tesla, Inc. that provides partial vehicle automation, corresponding to Level 2 automation as defined by SAE International """ https://en.wikipedia.org/wiki/Tesla_Autopilot
"""
Drive Autonomously: No manufacturer has achieved Level 4 or Level 5 autonomy for road use. Tesla has not achieved Level 3 conditional autonomy like Mercedes-Benz’s DrivePilot system. A Tesla cannot drive itself. The driver must remain attentive at all times. Tesla now qualifies Autopilot and Full Self-Driving with a (Supervised) parenthetical on its website. Tesla’s Level 2 system is a hands-on system that requires the driver to regain control immediately. Torque sensors confirm the driver’s hands are on the wheel or yoke. Level 2 Driving on City Streets: Tesla does list Autosteer on City Streets as a feature of Full-Self Driving. But notably, its website provides no further """ https://insideevs.com/news/742295/tesla-autopilot-abilities-...
""" In a statement addressing the US recall, Tesla declared its technology is a ‘Level Two’ semi-autonomous driving system – not the more advanced ‘Level Three’ system which is already being developed and rolled out by rival car-makers. """ https://www.drive.com.au/news/tesla-full-self-driving-level-...
> No manufacturer has achieved Level 4 or Level 5 autonomy for road use.
Waymo has achieved Level 4, with hundreds of thousands of paid rides per week and a stellar safety record. But they're not technically a manufacturer I guess.
I do not understand the official position of Tesla on FSD (Supervised).
The owner's manual Tesla has posted on www.tesla.com says, "Full Self-Driving (Supervised) is a hands-on feature that requires you to pay attention to the road at all times. Keep your hands on the steering wheel at all times..." [0]
The owners of Tesla vehicles are saying that they can just relax and not keep hands on the wheel.
Is Tesla maintaining a more conservative stance while encouraging drivers to do dumb things? Or is the owner's manual just out of date?
0: https://www.tesla.com/ownersmanual/modely/en_us/GUID-2CB6080...
I got my first Tesla last year and my first trip with FSD was just after the launch of V13, so I could not compare it to earlier versions. But I was shocked by how good it was. I completed an 800-mile trip with a handful of interventions, most or all of them likely unnecessary. Even with rain the system worked well. I don't understand how the system was not able to see this obstacle and slow down or change lanes. Did they have traffic tailgating? I can see some edge cases where there's really no way to avoid something like this safely. In any case it's pretty unfortunate and it will make me even more cautious when using FSD.
It's because anecdotal experience means very little. In ~2021, these vehicles could almost never make it from A to B without a safety-critical intervention.
That they can today doesn't mean they can do the same route 10,000 times without a safety-critical intervention. In fact, public tracking indicates Tesla's numbers are MUCH lower than that.
> Did they have traffic tailgating?
No. If you continue to watch the video after the collision, it's clear that there was no traffic tailgating. They even slowed down and pulled over to the side of the road. No other cars.
I'm unsure why having traffic tailgating even factors into it. If you have to hit the brakes to avoid a collision in front of you, it's not your responsibility to deal with traffic behind you that wasn't at a safe distance; that's all on them.
It’s not like you would expect any inconvenience from being rear-ended or anything so why would you care?
The point is "If you *have* to hit the brakes to avoid a collision in front of you". Obviously if you have a tailgating idiot behind and you can avoid the collision in front without forcing them to rear-end you, then you do that; but that's not always possible, and in that case the blame is then entirely on the person who hit you from the rear, and *not* on you for rear-ending someone.
Also even if you don’t want to get rear ended. You can at least do moderate braking and hit the debris at a slower speed. Humans make these sorts of split second tradeoffs all the time on the highway.
> Obviously if you have a tailgating idiot behind and you can avoid the collision in front without forcing them to rear-end you
That kind of consideration is where having traffic tailgating factors into it. A risk of collision (or a minor collision) may be a better option than a certain (or worse) collision with the tailgating idiot.
When they first released V13, the highway stack was still the old C++ code. It wasn't until another four or five months that they switched it to a neural network. It doesn't seem like they've put much focus on it since then.
Tesla FSD still uses C++. What you're referring to is control policy code that was deleted. FSD also doesn't have a separate "highway stack" and hasn't since v12.
Even with traffic tailgating it would need to brake or move to the right. Your comment sounds like a Tesla ad written by someone from Tesla or a heavy, heavy fanboy.
In part 2 they had to go to a radiator and muffler shop to weld up some parts: https://youtu.be/qYpJF0mxiuo
It’s an accident that could also happen to an inattentive human driver. (Ask me how I know :( )
But, if you watch the car’s display panel, it looks as if the car didn’t see anything and just went full speed ahead. That’s not great.
It should have slowed down and alerted the driver that something was there. I didn’t watch the complete video so maybe there’s more.
If you watch the video, the human non-drivers in the car noticed the road debris and could have reacted, but seemed not to because they were waiting for the computer to respond first.
They were testing FSD so I understand why they would do it. It also looked like roadkill, so they didn't expect it to be a hard obstacle.
You are correct that they were testing the system, so they probably waited too long to react. But Tesla’s car didn’t do well in this incident
A competent human driver would instinctively slow down, look at the potential obstruction, and think about changing lanes or an emergency stop.
Most probably, the visible object is just some water spilled on the road or that kind of thing. But if it isn’t then it’s very dangerous
This car appeared to be blind to any risk. That’s not acceptable
Heh, just yesterday I drove past some roadkill and wondered what a self-driving car would make of that. Then I started wondering how it would handle deer crossing the road 50 yards ahead, or running alongside the vehicle as they sometimes do when they get spooked. Or how it would handle the way people drive down the center of gravel roads until they meet someone, and then move over.
At 56, I don't expect to see it on country roads in my lifetime, but maybe someday they'll get there.
> They were testing FSD
So if that was a human and they ran them over it'd be okay because they were testing FSD?
They're putting themselves (fine) and everyone around them (far less fine) in danger with this stunt.
> It’s an accident that could also happen to an inattentive human driver. (Ask me how I know :( )
Anything "could" happen, but it would take an inordinately inattentive driver to be this bad.
They had 7-8 seconds of staring and talking about the debris before hitting it (or perhaps more, the video starts the moment the guy says "we got eh, a something", but possibly he saw it some instants before that).
So a person would need to be pretty much passed out to not see something with so much time to react.
At 14:40 in the video they show the footage to a cop whose immediate reaction is:
"To be honest with you. {talked over} you decide to change lane. How come you, did you think to change lanes or you just kinda froze up in that moment?"
He clearly thought they had plenty of time to react to it.
I thought the title meant the software crashed, but it was the car that crashed: "They didn’t make it out of California without crashing into easily avoidable road debris that badly damaged the Tesla Model Y ... In the video, you can see that the driver doesn’t have his hands on the steering wheel. The passenger spots the debris way ahead of time. There was plenty of time to react, but the driver didn’t get his hands on the steering wheel until the last second."
And if the car was fully self driving and capable of seeing debris that was clearly seen by two humans (whom tesla claims have inferior eyes), it could have taken the opportunity to change lanes and avoid the obstruction.
They want to pretend you'll only need to actually intervene in 'edge case' situations, but here we have an example of perfect conditions requiring intervention. Regardless of the buzzwords they can attach to whatever methods they are using, it doesn't feel like it works.
> whom tesla claims have inferior eyes
... wait, do they actually claim that? I mean that's just nonsense.
They were aware of the debris but intervening would lessen the impact of the video. I'm glad they didn't intervene. This is a bad look for FSD.
The issue with ‘driver in the loop only for emergencies’ is that it’s harder for someone to be prepared to react only occasionally (but still promptly and correctly) when outlier behavior happens, not easier.
For automated tools like CNC machines, there is a reason it’s just ‘emergency stop’ for instance, not ‘tap the correct thing we should do instead’.
But doing an e-stop on a road at speed is a very dangerous action as well, which is why that isn’t an option here either.
And you don't even know until it is too late even if you are paying attention. Suppose you see the obstruction well in advance. You expect the car to change lanes. You get past the point where you would have changed lanes but there's still time. You aren't yet aware that the car is missing the obstruction. A moment later you are much closer and make the judgement that the car is making a mistake. You grab the wheel but now it's too late.
The alternative is taking control of the car all the time. Something nobody is going to practically do.
Yeah, in many ways the only way a human can know for sure that the car isn't seeing something is the absence of action, which is often past the point of no return, and that is always hard.
And that's why many "real" self-driving systems that share the road with other users are often quite speed-limited: to allow them to come to a full stop relatively safely if needed, and to give a human time to take over properly. (E.g. the Mercedes system will stick way below the usual speed limits, which means its main use currently is handling heavy stop-and-go traffic, or the many autonomous "buses" for dense city spaces/campuses/...)
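To put rough numbers on why lower speeds buy so much margin (assumed comfortable-braking figures, not the Mercedes spec): braking distance grows with the square of speed, d ≈ v²/(2a).

    def braking_distance_m(speed_kmh, decel_mps2=5.0):
        """Distance to a full stop, ignoring reaction time; 5 m/s^2 is an assumed
        firm-but-not-emergency deceleration."""
        v = speed_kmh / 3.6
        return v * v / (2 * decel_mps2)

    print(round(braking_distance_m(60)))   # ~28 m
    print(round(braking_distance_m(130)))  # ~130 m: roughly double the speed, >4x the distance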
Passively monitoring a situation continuously and reacting quickly when needed is something humans are not good at. Even with airline pilots, where we do invest a lot in training and monitoring, it's a well-understood issue that having to take over in a surprising situation often leads to confusion and time needed to re-orient before they can take effective action. Which for planes is often fine, because you have the time buffer needed for that, but not always.
There's sort of an analogous conundrum with Airbus-style automation: the system has various levels of automation and protection (e.g. preventing you from stalling the plane)
And then when something goes wrong, you are unceremoniously handed a plane with potentially some or all of those protections no longer active.
As an analogy, imagine your FSD car was trying to slow down for something, but along the way there is some issue with a sensor. So it gives up and hands control back to you while it's in the middle of braking, yet now your ABS is no longer active.
So now the situation is much more sudden than it would have been (if you had been driving the car you would have been aware of it and slowing down for it youself earlier in the game), you likely weren't paying as much attention in the first place because of the automation, and some of the normal protection isn't working.
So it's almost three levels of adding insult to injury. Potentially related discussion: https://news.ycombinator.com/item?id=43970363
I’ll defend airbus a little - there are flight laws that more or less provide at any given moment as much automation as is possible given the state of the sensors and computers. So it doesn’t just go ‘oops, a sensor failed, now you have direct control of the plane.’
It does have the same problem - if 99.999% of your flight time is spent in normal law, you are not especially ready to operate in one of the alternate laws or, god forbid, direct law, which is similar to a driver who, accustomed to the system, has forgotten how to drive.
But I think we have a ways to go before we get there. If the car could detect issues earlier and more gradually notify the driver that they need to take control, most every driver at present retains the knowledge of how to directly operate a car with non-navigational automation (ABS as you mentioned, power steering, etc.).
Yeah, it's a tricky problem to solve, but other design decisions exacerbate it too, like the lack of visual or tactile feedback in the controls.
I was thinking of something similar to XL Airways Germany 888T. I was trying to find it and came across this thread making a similar comparison so I'll link that: https://www.reddit.com/r/AdmiralCloudberg/comments/18ks9nl/p...
But I think there was some other example with an engine asymmetry (an autothrottle issue?) that the autopilot was fighting with bank, and eventually it exceeded the bank limit and dumped a basically uncontrollable aircraft in the pilots' lap. It would have been more obvious if you were seeing the yoke bank more and more. (Though it looks like this was China Airlines 006, a 747SP, which contradicts that thought.)
I agree that we can make the situation less abrupt for cars in some cases (though people will probably get annoyed by the car bugging them for everything going on)
It's important to point out that airline pilots are trained to handle sudden emergencies. This has been incredibly successful at scale. But it came at great expense, in both money and lost lives. And it still isn't perfect.
The level of training required to oversee full automation is non-trivial if you have to do more than press a stop button.
Notably, it’s likely harder than… just driving yourself.
I've been using FSD lately on trips from CT to NYC and back (and the car performed well). My biggest fear has been debris (like shown in the video), or deer.
Props to the article for including a gif of the crash so we don’t have to give these people any more views on youtube.
Also that gif is amazing on its own. Def going in the memebank.
I sort of thought this was going to be a head-on collision; it was a piece of debris on the highway that I think a lot of people may have hit. Still, we expect autonomous vehicles to do better than people, but it didn't hit a car or injure anyone.
I was hoping to learn more about self-driving here but mostly see Elon gossip.
I had to chuckle at the footage (@12:45) of N439B being hauled by on a trailer.
We've lost our sense making ability in our coverage of AV progress. Setting aside Elon's polarizing behavior (which is a big thing to set aside) all of these things seem true:
- Elon bullshits wildly and sometimes delivers
- FSD is far ahead of other available personally owned autonomy - no other system even attempts to drive on surface level streets. FSD works better than you would think - I've been in several flawless rides across town in conditions I didn't expect it to handle.
- FSD doesn't work well enough to rely on it yet, so what's the point for real consumers who don't want to sit there nervously hovering their hands over the wheel even if it is impressive and you might have to ride several hours before anything goes awry.
- We can't really know how close FSD is to being reliable enough because all we have is marketing claims from Tesla, fan boy clips on YouTube, and haters who can't seem to discern where FSD really is ahead, even as it falls far short of hype
What I wish we had was a common metric audited by a third party reliably published in terms of hours until disengagement or something like that across all systems in various conditions.
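The metric itself would be trivial to compute; the hard part is honest, audited inputs. A sketch of the bookkeeping with invented field names (not any regulator's actual schema):

    from dataclasses import dataclass

    @dataclass
    class Drive:
        miles: float
        safety_critical_disengagements: int  # as judged by the auditor, not the vendor

    def miles_per_disengagement(drives):
        """Fleet-level miles per safety-critical disengagement."""
        total_miles = sum(d.miles for d in drives)
        total_events = sum(d.safety_critical_disengagements for d in drives)
        return float("inf") if total_events == 0 else total_miles / total_events

    # Toy numbers, not real data.
    print(miles_per_disengagement([Drive(120, 1), Drive(300, 0), Drive(80, 2)]))  # ~166.7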
I'm surprised this isn't flagged yet.
Musk's army must've missed this one.
I rant about Elon a lot but can someone just explain to me how this keeps going on ? FSD is almost completely a solved problem by the likes of Waymo etc. Why does anyone care what Tesla is failing to do with FSD? Is this all about, "how can we invent FSD without lidar"? Why are we bothering, because cybertruck owners don't want a dorky box on top of their truck? Does their truck already not look ridiculous?
FSD isn't just about the lack of lidar, it's the only system that can be employed everywhere, without any prior special mapping, by private owners. So far, there are no other manufacturers of vehicles available in the US who are meaningfully competing in this space.
It's the only system the manufacturer is willing to let be employed everywhere. I bet Waymo would work better everywhere, but they are safety conscious and care about liability.
> I bet Waymo would work better everywhere but they are safety conscious and care about liability.
Except you'd need to map "everywhere" in high-fidelity 3D, save it somewhere in the car, and have it accessible near-realtime. The real reason Waymo can't service "everywhere" is that their approach doesn't scale.
And don't get me wrong - it's clearly a better service where it works (at this point in time), but it'll realistically only ever work in pre-mapped cities. Which of course would remove a ton of drivers and accidents, so still a win.
It's almost entirely a product/economic gamble. Basically:
"How do we get self-driving into millions of regular consumer cars without doubling the price by adding expensive sensors and redesigning the vehicle around huge chunky sensors"
Waymo is focused on taxis for a reason: it's likely going to be unaffordable except to people driving Lambos. But that may also be fine for a big chunk of the public who don't want to own cars (or who want to rent one as a service).
Some consumer car companies are experimenting with adding smaller LIDAR sensors, like the recent Volvo EX90, which costs ~$100k. But they aren't as sophisticated as Waymo's.
Is LIDAR full of unobtainium? Is there some fundamental first-principles reason that the cost of LIDAR can't go down by orders of magnitude? Isn't that kind of reasoning from first principles what Elon's genius was supposed to be about, like with cost of accessing space? But apparently he thinks a spinning laser is always going to cost six figures?
That just seems like bad luck for the experiment. I've never seen anything like that driving on the highway. Was the claim that it could go coast to coast no matter what was thrown at it?
Major debris has been a regular part of highway driving throughout my life
Would lidar detect such debris?
It would depend on how dense the point cloud is. However, at 100 ft I'm guessing the resolution for something like Waymo would be on the order of 10-20 cm, which would suggest it wouldn't be able to detect it (at a minimum it wouldn't know its height, and thus it wouldn't know if it's a piece of paper or a steel box). My guess is it also wouldn't know until it's right on top of it.
Where did you get 10-20 cm? That sounds wildly off. Commercial lidar would get you to ~3-5 cm at 100 feet easily and Waymo's is said to be better than the commercially available units.
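For reference, the geometry is simple: point spacing is roughly range x tan(angular resolution). Assuming a nominal 0.1-degree channel spacing (a common automotive-lidar figure; Waymo's actual specs aren't public), the spacing at 100 ft lands near the few-centimeter estimate:

    import math

    def point_spacing_m(range_m, angular_res_deg):
        """Approximate spacing between adjacent lidar returns at a given range."""
        return range_m * math.tan(math.radians(angular_res_deg))

    # 100 ft is ~30.5 m.
    print(point_spacing_m(30.5, 0.1))   # ~0.053 m, i.e. ~5 cm between points
    print(point_spacing_m(30.5, 0.35))  # ~0.19 m: you'd need ~0.3-0.4 deg to get 10-20 cm

So the disagreement above really comes down to the assumed angular resolution.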
Yes. Everything that is/has a volume gets detected. The problem with the camera thing is that the debris doesn't move, is quite small, and is dark. I suspect it overran it, maybe because of a fix/patch for this behavior: https://www.reddit.com/r/TeslaFSD/comments/1kwrc7p/23_my_wit...
Cameras, which the other lidar-using companies also have, would have worked here just like they should. Tesla's problems go beyond a lack of lidar.
For those following along and curious about the tone and intensity of the criticism against Musk and Tesla (both here and in general) this blog post is a solid framework for understanding what drives many of the responses https://medium.com/incerto/the-most-intolerant-wins-the-dict...
I would try to pass it between the wheels and would crash the same.
At least for me there was nothing indicating there is not enough clearance.
Here is a better picture of the thing you are saying you would "drive over": https://www.proxibid.com/_next/image?url=https%3A%2F%2Fimage...
I’d disagree. :/
It's a huge chunk of metal well over a foot long in the middle of a lane. The center of a lane is usually higher than the sides, and an uneven patch of road can cause a slight bounce right before it, further reducing clearance. I'd worry about my RAV4 (8 inches) clearing it safely.
At first, I thought it was possibly a tire tread, which tend to curl up.
Your eyes work differently when not looking at a screen.
"One test is worth a thousand opinions."
And here, the too-frequently posted excuse that "oh, many humans would have hit that too" is utter nonsense.
In that situation, with light traffic, clear daylight visibility, and wide shoulders, any human who would have hit that is either highly distracted, incompetent, or drunk.
Both driver and passenger saw the object at least 7-8 seconds ahead of time: at 0:00 the passenger is pointing and they are commenting on the object, at 0:05 the passenger is leaning forward with concern, and the Tesla drives over it at 0:08. The "Full Self Driving" Tesla didn't even sound a warning until a second AFTER it hit the object.
Any alert half-competent driver would have certainly lifted off the accelerator, started braking and changing lanes in half that time. They didn't because of the expectation the Tesla would take some corrective action — bad assumption.
"My 'Full Self Driving' is as good as a drunk" is not a worthwhile claim.
Worse yet, the entire concept of [it drives and then hands control to the human when it can't handle a situation] is actively dangerous to levels of insanity.
Human perceptual and nervous systems are terrible at tasks requiring sustained vigilance; it is as if our brains evolved for attention to wander. Having a life-critical task that can literally kill you or others handled ALMOST fully autonomously is a situation designed for the human to lose attention and situational awareness. Then demanding that, in a split second, they become fully oriented, form a reaction plan, and execute it is a recipe for disaster.
In this case, it is even worse. The Tesla itself gave the humans zero warning.
The driver and passenger saw the object well in advance of the Tesla, with 3-4 times the time and distance needed to react effectively. But they assumed nothing was wrong because they expected the Tesla to handle the situation, and they were not in a driving mindset, instead waiting to see what the Tesla would do. They were not actively driving the car in the world. Fortunately, the only result was a mangled Tesla, this time.
> Tesla’s EV business is in decline and the stock price depends entirely on the self-driving and robot promises
I don't think that's right; I think the stock price entirely depends on people seeing it as a vehicle to invest in Musk. If Musk died tomorrow, but nothing else changed at Tesla, the stock price would crater.
If it were only that TSLA is a way for retail investors to buy shares of ELON, Tesla wouldn't need to do a robotaxi rollout in Austin to bolster claims that FSD is for real.
Elon can't levitate TSLA and other valuations by himself. There has to be at least the appearance of substance. That appearance is wearing thin. While I'm going to observe the caution that the market can stay irrational longer than I can stay solvent, once reality asserts itself, Elon will be powerless to recreate the illusion.
The world’s best selling vehicle means nothing? Extremely safe vehicles mean nothing?
I get that it’s way over hyped, but they have real results that can’t be denied
That would justify a valuation in the same range as other car companies. Not the valuation Tesla has right now.
The best selling model counts for something, but even with that Tesla just isn't a big car company when compared to the other major car companies.
They have the best selling vehicle by a little under 1%, with the Tesla Model Y just edging out the Toyota Corolla. But Toyota also has the 3rd best selling model (RAV4) that is about 7% behind the Model Y. And they have a third model in the top 10, the Camry, at a little over half the Model Y sales.
Just those 3 Toyota models combined sell about 30% more than all Tesla models combined.
Across all models Toyota sells 6 times as many cars as Tesla.
By number of cars sold per year Tesla is the 15th biggest car maker. The list is Toyota, Volkswagen, Hyundai-Kia, GM, Stellantis, Ford, BYD, Honda, Nissan, Suzuki, BMW, Mercedes-Benz, Renault, Geely, and then Tesla.
If we go by revenue from sales rather than units sold, it is 12th. The list is: Toyota, Volkswagen, Hyundai-Kia, Stellantis, GM, Ford, Mercedes-Benz, BMW, Honda, BYD, SAIC Motor, and then Tesla.
Yet Tesla has something like 6 times the market cap of Toyota and around 30 times the market caps of VW and Honda. That's pretty much all hype.
So a thing has to be among the “world’s biggest” to be of note?
Of course not.
They don’t make as many vehicles or have the revenue of other auto manufacturers, but who cares.
What they do, they do very, very well. They led to mass-market EV adoption. Even if they crumble tomorrow, their contribution is immense. Who cares about market cap? It's all just gambling.
I actually think the stock price would go up. His detour to fascism and megalomania has chased off tons of liberal environmentalists like myself that are the target audience for electric cars. I cancelled what would have been our replacement Tesla when he was implying on Twitter there can be good reasons for sneaking into the speaker of the house’s house and hitting her husband in the head with a hammer.
I think firing Musk would do wonders for Tesla as going concern but would be a disaster for the stock price.
I suspect a significant proportion of Tesla's stock price comes from people who are using it as a proxy for his other companies that the public can't invest in, primarily xAI (as all AI companies are in a horrific bubble right now) and SpaceX.
It may do this or that on the announcement, but if growth stops (arguably already has) and there is no hype for years, it's likely going to grind down to a normal valuation with time.
The passive investing / market cap weighted ETF complex tends to help big valuations stay big, but a company like Tesla still needs that sharp shot in the arm followed by frenzied buying occasionally in order to stay aloft (be it by traders, retail participants, shorts covering, etc).
I suppose they could replace Musk with another hype salesman, but the "hate" that Tesla gets is a big part of these upside shock cycles for the stock, because the ticker is a siren call for short sellers, who are ultimately guaranteed future buyers.
A thorough analysis of the materials/energy reality we inhabit could lead one to make a similar decision for environmental reasons alone
The issue is that if you look at Tesla as a normal car company (without an iconoclast CEO/personality), then you need to do normal P/E math, which is going to be sub-100 for sure.
Right now it's easily double to triple that, even with Musk's behavior.
Normal P/E math for a car company would put it more in the 10-20 region; we're not talking about it being double or triple a sensible valuation, but seriously something like 15 times it…
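To make the multiple explicit with assumed round numbers (purely illustrative, not actual market data):

    # Illustrative only: assumed figures, not real market data
    tesla_pe = 250        # "double to triple" a sub-100 P/E, per the comment above
    typical_auto_pe = 15  # mid-point of the 10-20 range for mature carmakers

    print(f"Implied premium: ~{tesla_pe / typical_auto_pe:.0f}x a normal car-company multiple")
    # -> roughly 17x, i.e. in the "something like 15 times" ballpark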
He definitely has the shareholders by the balls. No doubt about it.
Nah, Morgan Stanley and Wedbush would know the jig is up; the fraud would no longer be maintainable without Elon.
Yup. The effect of the reality distortion field has weakened, but remains plenty strong enough for now.
The reality distortion field is at ATH. Tesla stock is nearing the high it hit when the company was growing quickly and very profitable. Now sales and margins are shrinking, and the stock is soaring again.
Eh, stock price is based on the hope that the reality distortion field will strengthen again and sales/margins will shoot up.
I guess it really does depend on which reality distortion field we’re talking about haha.
There would be nothing to hit if it were called "Almost self driving but you still have to pay attention", instead of "Full Self Driving".
I honestly don’t know if I would have seen and avoided that, it came up really fast. And based on the video it looked like a cardboard box or something not worth avoiding until it was within 2-3 seconds range.
It may look like that on video, but in fact you can hear the two guys pointing it out and chatting about it a whole 8 seconds before impact.
Also, of course you avoid an unknown object that large, especially when there's plenty of space to go around it on either side.
If you still don't think you can avoid something like that, please get off the road for everyone's safety.
It's easy to give you credit here—you would have seen it and avoided it. They saw it and had plenty of time to steer to the left, in the open lane, to avoid it.
It was at least six seconds between the co-driver pointing it out and the car hitting the object. The screen shows 77 mph, which means they saw it from approximately 200 m away. Anybody would at least be able to prepare or slow down in such a situation.
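For anyone who wants to check the ~200 m figure from the 77 mph reading (a quick calculation, not anything taken from the video itself):

    # 77 mph over the 6-8 second warning window mentioned in the thread
    speed_mps = 77 * 1609.344 / 3600   # ~34.4 m/s
    for t in (6, 8):
        print(f"{t} s at 77 mph -> {speed_mps * t:.0f} m")
    # -> ~207 m at 6 s, ~275 m at 8 s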
I use Tesla FSD 99% of the time. I recently drove from Austin to Florida and back.
The only time I had to take over was for road debris on the highway. Off the highway it’s very good about avoiding it. My guess is Tesla has not been focusing on this issue as it's not needed for robotaxi for phase one.
Any software that works 90% or even 99% of the time is not safe software, especially when it is used to operate machines with deadly power.
sounds like you are arguing that humans should not be driving
Humans are way better than 99% at driving.
The fact that it works well in good conditions doesn't make it good. It's the unexpected situations that determine if someone lives or dies.
It doesn't have to do with the conditions. It has to do with whether or not it's on the highway. The car uses a different stack for highway and off-highway driving, and the highway stack has been a stepchild for quite a long time now.
That used to be true. Autopilot was for highways, FSD was for other roads. FSD can be enabled on both since v12 though and this video is specifically an attempt to use FSD on highways to go cross country.
Autopilot and FSD are completely different technologies. When v12 came out it only worked on city streets, and it used v11 for the highway.
Later, v12, which is the end-to-end neural network, worked on highways as well, but they use different stacks behind the scenes.
99% reliability is effectively no reliability. So per 100 km I should expect one issue? Multiple per week? FSD has to be much better than that to be trustworthy.
To put it another way.
A human would rather be involved in a crash of their own doing than in one caused because they let the machine take control and put trust in it.
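Going back to the "99%" figure a couple of comments up, here's a minimal sketch under the simplifying assumption that "99%" means a 99% chance of handling each kilometre without an intervention (real failure rates aren't independent per km); it shows how quickly that compounds:

    # Illustrative: assumes an independent 99% per-km success rate
    p_success_per_km = 0.99
    for km in (10, 100, 1000):
        p_at_least_one_issue = 1 - p_success_per_km ** km
        print(f"{km:>4} km -> {p_at_least_one_issue:.3%} chance of at least one issue")
    # -> ~9.6%, ~63%, and ~99.996% respectively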
Your experience supports the other data: it's not full self-driving.
I never said it was. That's why it can't drive itself and I need to keep my eyes on the road. The branding is terrible, but the tech is impressive.
> The überphilosopher Bertrand Russell presents a particularly toxic variant of my surprise jolt in his illustration of what people in his line of business call the Problem of Induction or Problem of Inductive Knowledge (capitalized for its seriousness)—certainly the mother of all problems in life. How can we logically go from specific instances to reach general conclusions? How do we know what we know? How do we know that what we have observed from given objects and events suffices to enable us to figure out their other properties? There are traps built into any kind of knowledge gained from observation.
> Consider a turkey that is fed every day. Every single feeding will firm up the bird's belief that it is the general rule of life to be fed every day by friendly members of the human race "looking out for its best interests," as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.
The Black Swan: The Impact of the Highly Improbable, Nassim Nicholas Taleb, page 40
What is the point of "testing" something that everyone knows doesn't work and only exists because a serial liar said so? If Musk says you will survive a fall from a 90-story high-rise... don't test it.
We should rigorously test the things we’re most skeptical of.
You shouldn’t take pills a stranger gives you at a music festival without checking them for loads of things, for example. Even if you don’t ever intend on consuming them it’s nice to have specific accusations with evidence.
Or you can discard them right away. I'm not the police, I don't investigate strangers.
"is a Level 2 driver assistance system that requires constant supervision by a human driver" - the reason for human supervision might have something to do with uncommon situations (debris in road being such a situation).
Elon's estimates have always been off, but it is irresponsible to see an obstacle up ahead and assume the computer will do something about it while the driver and passenger debate what said obstacle is. I am not sure if they were trying to win a Darwin Award, and I say that as no particular fan of Musk!
Level 2 is very dangerous because it's so good that humans are only needed in emergencies...
The safer the system, the more catastrophic the failures
I assume you spotted the problem? Or are you saying you missed it?
He's saying the more a driver trusts a partially automated driving system the less likely they are to be paying attention themselves.
I misread the ellipsis as sarcasm.